Fusion Middleware

Paketo Buildpacks - Cloud Native Buildpacks providing language runtime support for applications on Kubernetes or Cloud Foundry

Pas Apicella - Thu, 2020-05-07 05:10
Paketo Buildpacks are modular Buildpacks, written in Go. Paketo Buildpacks provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to make image builds easy, performant, and secure.

Paketo Buildpacks implement the Cloud Native Buildpacks specification, an emerging standard for building app container images. You can use Paketo Buildpacks with tools such as the CNB pack CLI, kpack, Tekton, and Skaffold, in addition to a number of cloud platforms.

Here's how simple they are to use.

Steps

1. To get started you need a few things installed. The most important are the Pack CLI and Docker up and running, which allow you to create OCI compliant images locally from your source code

Prerequisites:

    Pack CLI
    Docker
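
For example, on macOS one way to install the Pack CLI is via the Buildpacks Homebrew tap (this assumes you use Homebrew; see the buildpacks.io docs for other platforms):

$ brew install buildpacks/tap/pack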

2. Verify pack is installed as follows

$ pack version
0.10.0+git-06d9983.build-259

3. In this example I am going to use a Spring Boot application of mine. Its GitHub URL is below, so you can clone it if you want to follow along with this demo.

https://github.com/papicella/msa-apifirst

4. Build my OCI compliant image as follows.

$ pack build msa-apifirst-paketo -p ./msa-apifirst --builder gcr.io/paketo-buildpacks/builder:base
base: Pulling from paketo-buildpacks/builder
Digest: sha256:1bb775a178ed4c54246ab71f323d2a5af0e4b70c83b0dc84f974694b0221d636
Status: Image is up to date for gcr.io/paketo-buildpacks/builder:base
base-cnb: Pulling from paketo-buildpacks/run
Digest: sha256:d70bf0fe11d84277997c4a7da94b2867a90d6c0f55add4e19b7c565d5087206f
Status: Image is up to date for gcr.io/paketo-buildpacks/run:base-cnb
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1
[detector] paketo-buildpacks/executable-jar    1.2.2
[detector] paketo-buildpacks/apache-tomcat     1.1.2
[detector] paketo-buildpacks/dist-zip          1.2.2
[detector] paketo-buildpacks/spring-boot       1.5.2
===> ANALYZING
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:openssl-security-provider" from app image
[analyzer] Restoring metadata for "paketo-buildpacks/bellsoft-liberica:security-providers-configurer" from app image

...

[builder] Paketo Maven Buildpack 1.2.1
[builder]     Set $BP_MAVEN_SETTINGS to configure the contents of a settings.xml file. Default .
[builder]     Set $BP_MAVEN_BUILD_ARGUMENTS to configure the arguments passed to the build system. Default -Dmaven.test.skip=true package.
[builder]     Set $BP_MAVEN_BUILT_MODULE to configure the module to find application artifact in. Default .
[builder]     Set $BP_MAVEN_BUILT_ARTIFACT to configure the built application artifact. Default target/*.[jw]ar.
[builder]     Creating cache directory /home/cnb/.m2
[builder]   Compiled Application: Reusing cached layer
[builder]   Removing source code
[builder]
[builder] Paketo Executable JAR Buildpack 1.2.2
[builder]   Process types:
[builder]     executable-jar: java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     task:           java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]     web:            java -cp "${CLASSPATH}" ${JAVA_OPTS} org.springframework.boot.loader.JarLauncher
[builder]
[builder] Paketo Spring Boot Buildpack 1.5.2
[builder]   Image labels:
[builder]     org.opencontainers.image.title
[builder]     org.opencontainers.image.version
[builder]     org.springframework.boot.spring-configuration-metadata.json
[builder]     org.springframework.boot.version
===> EXPORTING
[exporter] Reusing layer 'launcher'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Reusing layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Reusing layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Reusing 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (726b340b596b):
[exporter]       index.docker.io/library/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:application'
[exporter] Reusing cache layer 'paketo-buildpacks/maven:cache'
[exporter] Reusing cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image msa-apifirst-paketo
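
As the build log above shows, the Maven buildpack is configured through BP_* environment variables. A hedged example of passing one of them at build time with pack's --env flag (the value shown is just the default from the log):

$ pack build msa-apifirst-paketo -p ./msa-apifirst \
    --builder gcr.io/paketo-buildpacks/builder:base \
    --env BP_MAVEN_BUILD_ARGUMENTS="-Dmaven.test.skip=true package"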

5. Now let's run our application locally as shown below

$ docker run --rm -p 8080:8080 msa-apifirst-paketo
Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=113348K -XX:ReservedCodeCacheSize=240M -Xss1M -Xmx423227K (Head Room: 0%, Loaded Class Count: 17598, Thread Count: 250, Total Memory: 1073741824)
Adding Security Providers to JVM

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.1.RELEASE)

2020-05-07 09:48:04.153  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Starting MsaApifirstApplication on 486f85c54667 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2020-05-07 09:48:04.160  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : No active profile set, falling back to default profiles: default

...

2020-05-07 09:48:15.515  INFO 1 --- [           main] p.a.p.m.apifirst.MsaApifirstApplication  : Started MsaApifirstApplication in 12.156 seconds (JVM running for 12.975)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.680  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.682  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.684  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2020-05-07 09:48:15.688  INFO 1 --- [           main] p.apj.pa.msa.apifirst.LoadDatabase       : Preloading Customer(id=4, name=siena, status=inactive)

6. Access the API endpoint using curl or HTTPie as shown below

$ http :8080/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Thu, 07 May 2020 09:49:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://localhost:8080/customers/1"
        },
        "self": {
            "href": "http://localhost:8080/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}
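
The equivalent curl request looks like this:

$ curl -s http://localhost:8080/customers/1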

It also has a Swagger UI endpoint as follows

http://localhost:8080/swagger-ui.html

7. Now, as shown below, you have a locally built OCI compliant image

$ docker images | grep msa-apifirst-paketo
msa-apifirst-paketo                       latest              726b340b596b        40 years ago        286MB

8. Now you can push this OCI compliant image to a container registry; here I am using Dockerhub

$ pack build pasapples/msa-apifirst-paketo:latest --publish --path ./msa-apifirst
cflinuxfs3: Pulling from cloudfoundry/cnb
Digest: sha256:30af1eb2c8a6f38f42d7305acb721493cd58b7f203705dc03a3f4b21f8439ce0
Status: Image is up to date for cloudfoundry/cnb:cflinuxfs3
===> DETECTING
[detector] 6 of 15 buildpacks participating
[detector] paketo-buildpacks/bellsoft-liberica 2.5.0
[detector] paketo-buildpacks/maven             1.2.1

...

===> EXPORTING
[exporter] Adding layer 'launcher'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:class-counter'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:java-security-properties'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jre'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:jvmkill'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:link-local-dns'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:memory-calculator'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:openssl-security-provider'
[exporter] Adding layer 'paketo-buildpacks/bellsoft-liberica:security-providers-configurer'
[exporter] Adding layer 'paketo-buildpacks/executable-jar:class-path'
[exporter] Adding 1/1 app layer(s)
[exporter] Adding layer 'config'
[exporter] *** Images (sha256:097c7f67ac3dfc4e83d53c6b3e61ada8dd3d2c1baab2eb860945eba46814dba5):
[exporter]       index.docker.io/pasapples/msa-apifirst-paketo:latest
[exporter] Adding cache layer 'paketo-buildpacks/bellsoft-liberica:jdk'
[exporter] Adding cache layer 'paketo-buildpacks/maven:application'
[exporter] Adding cache layer 'paketo-buildpacks/maven:cache'
[exporter] Adding cache layer 'paketo-buildpacks/executable-jar:class-path'
Successfully built image pasapples/msa-apifirst-paketo:latest

Dockerhub showing the pushed OCI compliant image


9. If you wanted to deploy your application to Kubernetes you could do that as follows.

$ kubectl create deployment msa-apifirst-paketo --image=pasapples/msa-apifirst-paketo
$ kubectl expose deployment msa-apifirst-paketo --type=LoadBalancer --port=8080
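
Once the service is created you can look up the external IP it was assigned (assuming your cluster supports LoadBalancer services):

$ kubectl get svc msa-apifirst-paketo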

10. Finally, you can select from three different builders, as listed below. We used the "base" builder in our example above
  • gcr.io/paketo-buildpacks/builder:full-cf
  • gcr.io/paketo-buildpacks/builder:base
  • gcr.io/paketo-buildpacks/builder:tiny

More Information

Paketo Buildpacks
https://paketo.io/
Categories: Fusion Middleware

Creating my first Tanzu Kubernetes Grid 1.0 workload cluster on AWS

Pas Apicella - Tue, 2020-05-05 04:15
With Tanzu Kubernetes Grid you can run the same K8s across data center, public cloud and edge for a consistent, secure experience for all development teams. To find out more, here is a step-by-step guide to getting this working on AWS, which is one of the first two supported IaaS options, the other being vSphere.

Steps

Before we get started we need to download a few bits and pieces all described here.

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-set-up-tkg.html

Once you have done that, make sure you have the tkg CLI as follows

$ tkg version
Client:
Version: v1.0.0
Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e

You will also need the following
  • kubectl is installed.
  • Docker is installed and running, if you are installing Tanzu Kubernetes Grid on Linux.
  • Docker Desktop is installed and running, if you are installing Tanzu Kubernetes Grid on Mac OS.
  • System time is synchronized with a Network Time Protocol (NTP) server.
Once that is done, follow this link for the AWS prerequisites and other downloads required

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws.html

1. Start by setting some AWS environment variables for your account. Ensure you select a region supported by TKG; in my case I am using a US region

export AWS_ACCESS_KEY_ID=YYYY
export AWS_SECRET_ACCESS_KEY=ZZZZ
export AWS_REGION=us-east-1

2. Run the following clusterawsadm command to create a CloudFormation stack.

$ ./clusterawsadm alpha bootstrap create-stack
Attempting to create CloudFormation stack cluster-api-provider-aws-sigs-k8s-io

Following resources are in the stack:

Resource                  |Type                                                                                |Status
AWS::IAM::Group           |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE
AWS::IAM::InstanceProfile |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/control-plane.cluster-api-provider-aws.sigs.k8s.io |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/nodes.cluster-api-provider-aws.sigs.k8s.io         |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::667166452325:policy/controllers.cluster-api-provider-aws.sigs.k8s.io   |CREATE_COMPLETE
AWS::IAM::Role            |control-plane.cluster-api-provider-aws.sigs.k8s.io                                  |CREATE_COMPLETE
AWS::IAM::Role            |controllers.cluster-api-provider-aws.sigs.k8s.io                                    |CREATE_COMPLETE
AWS::IAM::Role            |nodes.cluster-api-provider-aws.sigs.k8s.io                                          |CREATE_COMPLETE
AWS::IAM::User            |bootstrapper.cluster-api-provider-aws.sigs.k8s.io                                   |CREATE_COMPLETE

On the AWS console you should see the stack created as follows


3. Ensure an SSH key pair exists in your region as shown below

$ aws ec2 describe-key-pairs --key-name us-east-key
{
    "KeyPairs": [
        {
            "KeyFingerprint": "71:44:e3:f9:0e:93:1f:e7:1e:c4:ba:58:e8:65:92:3e:dc:e6:27:42",
            "KeyName": "us-east-key"
        }
    ]
}

4. Set Your AWS Credentials as Environment Variables for Use by Cluster API

$ export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)

$ export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)

$ export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)

$ export AWS_B64ENCODED_CREDENTIALS=$(./clusterawsadm alpha bootstrap encode-aws-credentials)

5. Set the correct AMI for your region.

List here: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/rn/VMware-Tanzu-Kubernetes-Grid-10-Release-Notes.html#amis

$ export AWS_AMI_ID=ami-0cdd7837e1fdd81f8
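
As a quick sanity check (not part of the original post) you can confirm the AMI exists in your region with the AWS CLI:

$ aws ec2 describe-images --image-ids $AWS_AMI_ID --region $AWS_REGION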

6. Deploy the Management Cluster to Amazon EC2 with the Installer Interface

$ tkg init --ui

Follow the docs link below to fill in the desired details; most of the defaults should work

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-install-tkg-aws-ui.html

Once complete:

$ ./tkg init --ui
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T091728980865562.log

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider aws:v0.5.2
Generating cluster configuration...
Setting up bootstrapper...
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Start creating management cluster...
Installing providers on management cluster...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.2" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster pasaws-tkg-man-cluster as 'pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster'.

Management cluster created!


You can now create your first workload cluster by running the following:

  tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]


In the AWS console EC2 instances page you will see a few VMs that represent the management cluster, as shown below


7. Show the management cluster as follows

$ tkg get management-cluster
+--------------------------+-----------------------------------------------------+
| MANAGEMENT CLUSTER NAME  | CONTEXT NAME                                        |
+--------------------------+-----------------------------------------------------+
| pasaws-tkg-man-cluster * | pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster |
+--------------------------+-----------------------------------------------------+

8. You can connect to the management cluster as follows to look at what is running

$ kubectl config use-context pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster
Switched to context "pasaws-tkg-man-cluster-admin@pasaws-tkg-man-cluster".

9. Deploy a dev cluster with multiple worker nodes as shown below. This should take about 10 minutes or so.

$ tkg create cluster apples-aws-tkg --plan=dev --worker-machine-count 2
Logs of the command execution can also be found at: /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tkg-20200429T101702293042678.log
Creating workload cluster 'apples-aws-tkg'...

Context set for workload cluster apples-aws-tkg as apples-aws-tkg-admin@apples-aws-tkg

Waiting for cluster nodes to be available...

Workload cluster 'apples-aws-tkg' created

In the AWS console EC2 instances page you will see a few more VMs that represent our new TKG workload cluster


10. View which workload clusters are under management and have been created

$ tkg get clusters
+----------------+-------------+
| NAME           | STATUS      |
+----------------+-------------+
| apples-aws-tkg | Provisioned |
+----------------+-------------+

11. To connect to the workload cluster we just created, use the following commands

$ tkg get credentials apples-aws-tkg
Credentials of workload cluster apples-aws-tkg have been saved
You can now access the cluster by switching the context to apples-aws-tkg-admin@apples-aws-tkg under /Users/papicella/.kube/config

$ kubectl config use-context apples-aws-tkg-admin@apples-aws-tkg
Switched to context "apples-aws-tkg-admin@apples-aws-tkg".

$ kubectl cluster-info
Kubernetes master is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443
KubeDNS is running at https://apples-aws-tkg-apiserver-2050013369.us-east-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The following link will also be helpful
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-tanzu-k8s-clusters-connect.html

12. View your cluster nodes as shown below
  
$ kubectl get nodes
NAME                         STATUS   ROLES    AGE     VERSION
ip-10-0-0-12.ec2.internal    Ready    <none>   6h24m   v1.17.3+vmware.2
ip-10-0-0-143.ec2.internal   Ready    master   6h25m   v1.17.3+vmware.2
ip-10-0-0-63.ec2.internal    Ready    <none>   6h24m   v1.17.3+vmware.2
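
As a quick smoke test (a hypothetical example, not from the original post, assuming the cluster's pod security settings allow it) you could deploy nginx into the workload cluster:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --type=LoadBalancer --port=80
$ kubectl get svc nginx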

Now you're ready to deploy workloads into your TKG workload cluster and/or create as many clusters as you need. For more information use the links below.


More Information

VMware Tanzu Kubernetes Grid
https://tanzu.vmware.com/kubernetes-grid

VMware Tanzu Kubernetes Grid 1.0 Documentation
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-index.html


Categories: Fusion Middleware

Running Oracle 18c on a vSphere 7 using a Tanzu Kubernetes Grid Cluster

Pas Apicella - Sun, 2020-05-03 20:53
Previously I blogged about how to run a stateful MySQL pod on vSphere 7 with Kubernetes. In this blog post we will do the same with an Oracle Database single instance.

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes
http://theblasfrompas.blogspot.com/2020/04/creating-single-instance-stateful-mysql.html

For this blog we will use a single instance Oracle database, namely Oracle Database 18c (18.4.0) Express Edition (XE), but we could use any of the following if we wanted to. For a demo, Oracle XE is all I need.
  • Oracle Database 19c (19.3.0) Enterprise Edition and Standard Edition 2
  • Oracle Database 18c (18.4.0) Express Edition (XE)
  • Oracle Database 18c (18.3.0) Enterprise Edition and Standard Edition 2
  • Oracle Database 12c Release 2 (12.2.0.2) Enterprise Edition and Standard Edition 2
  • Oracle Database 12c Release 1 (12.1.0.2) Enterprise Edition and Standard Edition 2
  • Oracle Database 11g Release 2 (11.2.0.2) Express Edition (XE)
Steps

1. First head to the following GitHub URL which contains sample Docker build files to facilitate installation, configuration, and environment setup for DevOps users. Clone it as shown below

$ git clone https://github.com/oracle/docker-images.git

2. Change to the directory as follows.

$ cd oracle/docker-images/OracleDatabase/SingleInstance/dockerfiles

3. Now ensure you have a local Docker daemon running; in my case I am using Docker Desktop for macOS. With that running, let's build our Docker image locally as shown below for the database [Oracle Database 18c (18.4.0) Express Edition (XE)]

$ ./buildDockerImage.sh -v 18.4.0 -x

....

.

  Oracle Database Docker Image for 'xe' version 18.4.0 is ready to be extended:

    --> oracle/database:18.4.0-xe

  Build completed in 1421 seconds.

4. View the image locally using "docker images"
  
$ docker images
REPOSITORY        TAG         IMAGE ID       CREATED         SIZE
oracle/database   18.4.0-xe   3ec5d050b739   5 minutes ago   5.86GB
oraclelinux       7-slim      f23503228fa1   2 weeks ago     120MB

5. We are not really interested in running Oracle locally, so let's push the built image to a container registry. In this case I am using Dockerhub

$ docker tag oracle/database:18.4.0-xe pasapples/oracle18.4.0-xe
$ docker push pasapples/oracle18.4.0-xe
The push refers to repository [docker.io/pasapples/oracle18.4.0-xe]
5bf989482a54: Pushed
899f9c386f90: Pushed
bc198e3a2f79: Mounted from library/oraclelinux
latest: digest: sha256:0dbbb906b20e8b052a5d11744a25e75edff07231980b7e110f45387e4956600a size: 951

Once done, here is the image on Dockerhub



6. At this point we are ready to deploy our Oracle Database 18c (18.4.0) Express Edition (XE). To do that we will use a Tanzu Kubernetes Grid cluster on vSphere 7. For an example of how that was created visit the blog post below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

We will be using a cluster called "tkg-cluster-1" as shown in vSphere client image below.


7. Ensure we have switched to the correct context here as shown below.

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

8. Now let's create a PVC for our Oracle database. Ensure you use a storage class name you have previously set up; in my case that's "pacific-gold-storage-policy". You don't really need 80Gi for a demo with Oracle XE, but given I had 2TB of storage I set it quite high.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-pv-claim
  annotations:
    pv.beta.kubernetes.io/gid: "54321"
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi

$ kubectl create -f oracle-pvc.yaml
persistentvolumeclaim/oracle-pv-claim created

$ kubectl describe pvc oracle-pv-claim
Name:          oracle-pv-claim
Namespace:     default
StorageClass:  pacific-gold-storage-policy
Status:        Bound
Volume:        pvc-385ee541-5f7b-4a10-95de-f8b35a24306f
Labels:       
Annotations:   pv.beta.kubernetes.io/gid: 54321
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      80Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:   
Events:
  Type    Reason                Age   From                                                                                                 Message
  ----    ------                ----  ----                                                                                                 -------
  Normal  ExternalProvisioning  49s   persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal  Provisioning          49s   csi.vsphere.vmware.com_vsphere-csi-controller-8446748d4d-qbjhn_acc32eab-845a-11ea-a597-baf3d8b74e48  External provisioner is provisioning volume for claim "default/oracle-pv-claim"

9. Now we are ready to create a Deployment YAML as shown below. A few things to note about the YAML:
  1. I am hard coding the password, but normally I would use a k8s Secret to do this (see the sketch after the YAML below)
  2. I needed to create an init container which fixed a file system permission issue for me
  3. I am running as the root user as per "runAsUser: 0"; for some reason the installation would not start without root privileges
  4. I am using the PVC we created above, "oracle-pv-claim"
  5. I want to expose port 1521 (database listener port) and 5500 (Enterprise Manager port) internally only for now, as per the Service definition.
Deployment YAML:


apiVersion: v1
kind: Service
metadata:
  name: oracle
spec:
  ports:
  - port: 1521
    name: dblistport
  - port: 5500
    name: emport
  selector:
    app: oracle
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: oracle
spec:
  selector:
    matchLabels:
      app: oracle
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracle
    spec:
      containers:
      - image: pasapples/oracle18.4.0-xe
        name: oracle
        env:
          # Use secret in real usage
        - name: ORACLE_PWD
          value: welcome1
        - name: ORACLE_CHARACTERSET
          value: AL32UTF8
        ports:
        - containerPort: 1521
          name: dblistport
        - containerPort: 5500
          name: emport
        volumeMounts:
        - name: oracle-persistent-storage
          mountPath: /opt/oracle/oradata
        securityContext:
          runAsUser: 0
          runAsGroup: 54321
      initContainers:
      - name: fix-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 54321:54321 /opt/oracle/oradata && chmod 777 /opt/oracle/oradata
        volumeMounts:
        - name: oracle-persistent-storage
          mountPath: /opt/oracle/oradata
      volumes:
      - name: oracle-persistent-storage
        persistentVolumeClaim:
          claimName: oracle-pv-claim
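
As noted in point 1 above, in real usage you would keep the password in a Kubernetes Secret rather than hard coding it. A minimal sketch (the secret name oracle-secrets is just an example):

$ kubectl create secret generic oracle-secrets --from-literal=ORACLE_PWD=welcome1

The ORACLE_PWD entry in the Deployment would then use valueFrom/secretKeyRef against that secret instead of a literal value.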

10. Apply the YAML as shown below

$ kubectl create -f oracle-deployment.yaml
service/oracle created
deployment.apps/oracle created

11. Wait for the oracle pod to be in a running state as shown below; this should happen fairly quickly
  
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx-574b87c764-2zrp2    1/1     Running   0          11d
nginx-574b87c764-p8d45    1/1     Running   0          11d
oracle-77f6f7d567-sfd67   1/1     Running   0          36s

12. You can now monitor the pod as it starts to create the database instance for us using the "kubectl logs" command as shown below

$ kubectl logs oracle-77f6f7d567-sfd67 -f
ORACLE PASSWORD FOR SYS AND SYSTEM: welcome1
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password:
**********
Enter SYSTEM user password:
********
Enter PDBADMIN User Password:
**********
Prepare for db operation
7% complete
Copying database files
....

13. This will take some time but eventually it will have created / started the database instance for us as shown below

$ kubectl logs oracle-77f6f7d567-sfd67 -f
ORACLE PASSWORD FOR SYS AND SYSTEM: welcome1
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password:
**********
Enter SYSTEM user password:
********
Enter PDBADMIN User Password:
**********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: oracle-77f6f7d567-sfd67/XEPDB1
     Multitenant container database: oracle-77f6f7d567-sfd67
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
Pluggable database XEPDB1 opened read write
Completed: alter pluggable database XEPDB1 open
2020-05-04T00:59:32.719571+00:00
XEPDB1(3):CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/XE/XEPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
XEPDB1(3):Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING  DATAFILE  '/opt/oracle/oradata/XE/XEPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT  1280K MAXSIZE UNLIMITED  EXTENT MANAGEMENT LOCAL  SEGMENT SPACE MANAGEMENT  AUTO
XEPDB1(3):ALTER DATABASE DEFAULT TABLESPACE "USERS"
XEPDB1(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
2020-05-04T00:59:37.043341+00:00
ALTER PLUGGABLE DATABASE XEPDB1 SAVE STATE
Completed: ALTER PLUGGABLE DATABASE XEPDB1 SAVE STATE

14. The easiest way to test out our database instance is to "exec" into the pod and use SQL*Plus as shown below

- Create a script as follows

export POD_NAME=`kubectl get pod -l app=oracle -o jsonpath="{.items[0].metadata.name}"`
kubectl exec -it $POD_NAME -- /bin/bash

- Execute the script to exec into the pod

$ ./exec-oracle-pod.sh
bash-4.2#

15. Now let's connect in one of two ways, given we also have a pluggable database instance running
  
bash-4.2# sqlplus system/welcome1@XE

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 4 01:02:38 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.


Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> exit
Disconnected from Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
bash-4.2# sqlplus system/welcome1@XEPDB1

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 4 01:03:20 2020
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Last Successful login time: Mon May 04 2020 01:02:38 +00:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL>
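
From here you could run a quick query to confirm the pluggable database is open (illustrative only, not part of the original session):

SQL> select name, open_mode from v$pdbs;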

16. Now let's connect externally to the database. To do that I could create a port forward of the Oracle database listener port as shown below. I have set up the Oracle Instant Client using the following URL: https://www.oracle.com/database/technologies/instant-client/macos-intel-x86-downloads.html

$ kubectl port-forward --namespace default oracle-77f6f7d567-sfd67 1521
Forwarding from 127.0.0.1:1521 -> 1521
Forwarding from [::1]:1521 -> 1521

Now log in using SQL*Plus directly from my macOS terminal window

  
$ sqlplus system/welcome1@//localhost:1521/XEPDB1

SQL*Plus: Release 19.0.0.0.0 - Production on Mon May 4 11:43:05 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Mon May 04 2020 11:39:46 +10:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL>


17. We could also use Oracle Enterprise Manager, which we would do as follows. We could create a k8s Service of type LoadBalancer as well, but for now let's just do a simple port forward as per above

$ kubectl port-forward --namespace default oracle-77f6f7d567-sfd67 5500
Forwarding from 127.0.0.1:5500 -> 5500
Forwarding from [::1]:5500 -> 5500

18. Access Oracle Enterprise Manager as follows, ensuring you have Flash installed in your browser. I logged in using the "SYS" user as "SYSDBA"

https://localhost:5500/em

Once logged in:






And that's it: you have Oracle 18c and Oracle Enterprise Manager running on vSphere 7 with Kubernetes, and you can now start deploying applications that use that Oracle instance as required.


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html


Categories: Fusion Middleware

Creating a Single instance stateful MySQL pod on vSphere 7 with Kubernetes

Pas Apicella - Mon, 2020-04-27 20:32
In the vSphere environment, the persistent volume objects are backed by virtual disks that reside on datastores. Datastores are represented by storage policies. After the vSphere administrator creates a storage policy, for example gold, and assigns it to a namespace in a Supervisor Cluster, the storage policy appears as a matching Kubernetes storage class in the Supervisor Namespace and any available Tanzu Kubernetes clusters.

In the example below we will show how to run a single instance stateful MySQL application pod on vSphere 7 with Kubernetes. For an introduction to vSphere 7 with Kubernetes see this blog link below.

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"
http://theblasfrompas.blogspot.com/2020/04/a-first-look-running-kubenetes-cluster.html

Steps 

1. If you followed the Blog above you will have a Namespace as shown in the image below. The namespace we are using is called "ns1"



2. Click on "ns1" and ensure you have added storage using the "Storage" card



3. Now let's connect to our supervisor cluster and switch to the Namespace "ns1"

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS \
  --vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <CONTEXT>`

4. At this point we need to switch to the Namespace we created at step 2 which is "ns1".

$ kubectl config use-context ns1
Switched to context "ns1".

5. Use one of the following commands to verify that the storage class is the one which we added to the Namespace as per #2 above, in this case "pacific-gold-storage-policy".
  
$ kubectl get storageclass
NAME                          PROVISIONER              AGE
pacific-gold-storage-policy   csi.vsphere.vmware.com   5d20h

$ kubectl describe namespace ns1
Name:         ns1
Labels:       vSphereClusterID=domain-c8
Annotations:  ncp/extpoolid: domain-c8:1d3e6bfb-af68-4494-a9bf-c8560a7a6aef-ippool-10-193-191-129-10-193-191-190
              ncp/snat_ip: 10.193.191.141
              ncp/subnet-0: 10.244.0.240/28
              ncp/subnet-1: 10.244.1.16/28
              vmware-system-resource-pool: resgroup-67
              vmware-system-vm-folder: group-v68
Status:       Active

Resource Quotas
 Name:      ns1-storagequota
 Resource                                                                    Used   Hard
 --------                                                                    ----   ----
 pacific-gold-storage-policy.storageclass.storage.k8s.io/requests.storage    20Gi   9223372036854775807

No resource limits.

As a DevOps engineer, you can use the storage class in your persistent volume claim specifications. You can then deploy an application that uses storage from the persistent volume claim.

6. At this point we can create a Persistent Volume Claim using YAML as follows. In the example below we reference the storage class name "pacific-gold-storage-policy".

Note: We are using a Supervisor Cluster Namespace here for our Stateful MySQL application but the storage class name will also appear in any Tanzu Kubernetes clusters you have created.

Example:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: pacific-gold-storage-policy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-pv-claim created

7. Let's view the PVC we just created
  
$ kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
mysql-pv-claim   Bound    pvc-a60f2787-ccf4-4142-8bf5-14082ae33403   20Gi       RWO            pacific-gold-storage-policy   39s

8. Now let's create a Deployment that will mount this PVC we created above using the name "mysql-pv-claim"

Example:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

$ kubectl apply -f mysql-deployment.yaml
service/mysql created
deployment.apps/mysql created

9. Let's verify we have a running Deployment with a MySQL pod as shown below
  
$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/mysql-c85f7f79c-gskkr   1/1     Running   0          78s
pod/nginx                   1/1     Running   0          3d21h

NAME                                          TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
service/mysql                                 ClusterIP      None          <none>          3306/TCP         79s
service/tkg-cluster-1-60657ac113b7b5a0ebaab   LoadBalancer   10.96.0.253   10.193.191.68   80:32078/TCP     5d19h
service/tkg-cluster-1-control-plane-service   LoadBalancer   10.96.0.222   10.193.191.66   6443:30659/TCP   5d19h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           79s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-c85f7f79c   1         1         1       79s

10. If we return to the vSphere client we will see our MySQL stateful deployment as shown below


11. We can also view the PVC we have created in the vSphere client



12. Finally let's connect to the MySQL database, which is done as follows

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.47 MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| performance_schema  |
+---------------------+
4 rows in set (0.02 sec)

mysql>
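
To convince yourself the data really lands on the persistent volume you could create a throwaway database and table (illustrative only):

mysql> create database demo;
mysql> use demo;
mysql> create table t1 (id int primary key);
mysql> insert into t1 values (1);

If you then delete the MySQL pod and let the Deployment recreate it, the demo database should still be there, because /var/lib/mysql is backed by the PVC.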


More Information

Deploy a Stateful Application
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-D875DED3-41A1-484F-A1CD-13810D674420.html

Display Storage Classes in a Supervisor Namespace or Tanzu Kubernetes Cluster
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52.html#GUID-883E60F9-03C5-40D7-9AB8-BE42835B7B52
Categories: Fusion Middleware

A first look at running a Kubernetes cluster on "vSphere 7 with Kubernetes"

Pas Apicella - Wed, 2020-04-22 19:40
VMware recently announced the general availability of vSphere 7. Among many new features is the integration of Kubernetes into vSphere. In this blog post we will see what is required to create our first Kubernetes Guest cluster and deploy the simplest of workloads.



Steps

1. Log into the vCenter client and select "Menu -> Workload Management" and click on "Enable"

Full details on how to enable and setup the Supervisor Cluster can be found at the following docs

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-21ABC792-0A23-40EF-8D37-0367B483585E.html

Make sure you enable Harbor as the Registry using this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html

A pre-requisite for Workload Management is to have NSX-T 3.0 installed / enabled. https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html

Once that is all done, the "Workload Management" page will look like this. This can take around 30 minutes to complete



2. As a vSphere administrator, you can create namespaces on a Supervisor Cluster and configure them with resource quotas and storage, as well as set permissions for DevOps engineer users. Once you configure a namespace, you can provide it to DevOps engineers, who run vSphere Pods and Kubernetes clusters created through the VMware Tanzu™ Kubernetes Grid™ Service.

To do this follow this link below

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-1544C9FE-0B23-434E-B823-C59EFC2F7309.html

Note: Make a note of this Namespace as we are going to need to connect to it shortly. In the examples below we have a namespace called "ns1"

3. With a vSphere namespace created we can now download the required CLI

Note: You can get the files from the Namespace summary page as shown below under the heading "Link to CLI Tools"



Once downloaded, put the contents of the .zip file in your OS's executable search path

4. Now we are ready to login. To do that we will use a command as follows

kubectl vsphere login --server=SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS \
  --vsphere-username VCENTER-SSO-USER

Example:

$ kubectl vsphere login --insecure-skip-tls-verify --server wcp.haas-yyy.pez.pivotal.io -u administrator@vsphere.local

Password:
Logged in successfully.

You have access to the following contexts:
   ns1
   wcp.haas-253.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <CONTEXT>`

Full instructions are at the following URL

https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-F5114388-1838-4B3B-8A8D-4AE17F33526A.html

5. At this point we need to switch to the Namespace we created at step 2 which is "ns1"

$ kubectl config use-context ns1
Switched to context "ns1".

6. Get a list of the available content images and the Kubernetes version that the image provides

Command: kubectl get virtualmachineimages
  
$ kubectl get virtualmachineimages
NAME                                                        AGE
ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd   35m

Version Information can be retrieved as follows:
  
$ kubectl describe virtualmachineimage ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Name:         ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
Namespace:
Labels:       <none>
Annotations:  vmware-system.compatibilityoffering:
                [{"requires": {"k8s.io/configmap": [{"predicate": {"operation": "anyOf", "arguments": [{"operation": "not", "arguments": [{"operation": "i...
              vmware-system.guest.kubernetes.addons.calico:
                {"type": "inline", "value": "---\n# Source: calico/templates/calico-config.yaml\n# This ConfigMap is used to configure a self-hosted Calic...
              vmware-system.guest.kubernetes.addons.pvcsi:
                {"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: {{ .PVCSINamespace }}\n---\nkind: ServiceAccount\napiVers...
              vmware-system.guest.kubernetes.addons.vmware-guest-cluster:
                {"type": "inline", "value": "apiVersion: v1\nkind: Namespace\nmetadata:\n name: vmware-system-cloud-provider\n---\napiVersion: v1\nkind: ...
              vmware-system.guest.kubernetes.distribution.image.version:
                {"kubernetes": {"version": "1.16.8+vmware.1", "imageRepository": "vmware.io"}, "compatibility-7.0.0.10100": {"isCompatible": "true"}, "dis...
API Version:  vmoperator.vmware.com/v1alpha1
Kind:         VirtualMachineImage
Metadata:
  Creation Timestamp:  2020-04-22T04:52:42Z
  Generation:          1
  Resource Version:    28324
  Self Link:           /apis/vmoperator.vmware.com/v1alpha1/virtualmachineimages/ob-15957779-photon-3-k8s-v1.16.8---vmware.1-tkg.3.60d2ffd
  UID:                 9b2a8248-d315-4b50-806f-f135459801a8
Spec:
  Image Source Type:   Content Library
  Type:                ovf
Events:                <none>


7. Create a YAML file with the required configuration parameters to define the cluster

A few things to note:
  1. Make sure your storageClass name matches the storage class name you used during setup
  2. Make sure your distribution version matches a name from the output of step 6
Example:

apiVersion: run.tanzu.vmware.com/v1alpha1               #TKG API endpoint
kind: TanzuKubernetesCluster                            #required parameter
metadata:
  name: tkg-cluster-1                                   #cluster name, user defined
  namespace: ns1                                        #supervisor namespace
spec:
  distribution:
    version: v1.16                                      #resolved kubernetes version
  topology:
    controlPlane:
      count: 1                                          #number of control plane nodes
      class: best-effort-small                          #vmclass for control plane nodes
      storageClass: pacific-gold-storage-policy         #storageclass for control plane
    workers:
      count: 3                                          #number of worker nodes
      class: best-effort-small                          #vmclass for worker nodes
      storageClass: pacific-gold-storage-policy         #storageclass for worker nodes

More information on what goes into your YAML is defined here

Configuration Parameters for Provisioning Tanzu Kubernetes Clusters
https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-4E68C7F2-C948-489A-A909-C7A1F3DC545F.html

8. Provision the Tanzu Kubernetes cluster using the following kubectl command against the manifest file above

Command: kubectl apply -f CLUSTER-NAME.yaml
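
For the manifest above that would be something like this (assuming you saved it as tkg-cluster-1.yaml):

$ kubectl apply -f tkg-cluster-1.yaml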

While creating you can check the status as follows

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME                                                         CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1    1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   15m   running

NAME                                     PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1   provisioned

NAME                                                                    PROVIDERID                                       PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7              vsphere://420c7807-d2f2-0461-8232-ec33e07632fa   running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp                                                     provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm                                                     provisioning
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c                                                     provisioning

NAME                                                                                AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7              14m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp    6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm    6m3s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c    6m4s

9. Run the following command and make sure the Tanzu Kubernetes cluster is running; this may take some time.

Command: kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
  
$ kubectl get TanzuKubernetesCluster,clusters.cluster.x-k8s.io,machine.cluster.x-k8s.io,virtualmachines
NAME                                                         CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tanzukubernetescluster.run.tanzu.vmware.com/tkg-cluster-1    1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   18m   running

NAME                                     PHASE
cluster.cluster.x-k8s.io/tkg-cluster-1   provisioned

NAME                                                                    PROVIDERID                                       PHASE
machine.cluster.x-k8s.io/tkg-cluster-1-control-plane-4jmn7              vsphere://420c7807-d2f2-0461-8232-ec33e07632fa   running
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp    vsphere://420ca6ec-9793-7f23-2cd9-67b46c4cc49d   provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm    vsphere://420c9dd0-4fee-deb1-5673-dabc52b822ca   provisioned
machine.cluster.x-k8s.io/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c    vsphere://420cf11f-24e4-83dd-be10-7c87e5486f1c   provisioned

NAME                                                                                AGE
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-control-plane-4jmn7              18m
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-2qznp    9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-m5xnm    9m58s
virtualmachine.vmoperator.vmware.com/tkg-cluster-1-workers-z5d2x-cc45bbd76-vv26c    9m59s

10. For a more concise view of the Tanzu Kubernetes Clusters you have, this command with its status output is enough

Command: kubectl get tanzukubernetescluster
  
$ kubectl get tanzukubernetescluster
NAME            CONTROL PLANE   WORKER   DISTRIBUTION                     AGE   PHASE
tkg-cluster-1   1               3        v1.16.8+vmware.1-tkg.3.60d2ffd   20m   running

11. Now let's log in to a Tanzu Kubernetes Cluster using its name as follows

kubectl vsphere login --tanzu-kubernetes-cluster-name TKG-CLUSTER-NAME --vsphere-username VCENTER-SSO-USER --server SUPERVISOR-CLUSTER-CONTROL-PLANE-IP-ADDRESS --insecure-skip-tls-verify

Example:

$ kubectl vsphere login --tanzu-kubernetes-cluster-name tkg-cluster-1 --vsphere-username administrator@vsphere.local --server wcp.haas-yyy.pez.pivotal.io --insecure-skip-tls-verify

Password:

Logged in successfully.

You have access to the following contexts:
   ns1
   tkg-cluster-1
   wcp.haas-yyy.pez.pivotal.io

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <CONTEXT>`

12. Let's switch to the correct context here which is our newly created Kubernetes cluster

$ kubectl config use-context tkg-cluster-1
Switched to context "tkg-cluster-1".

13. If your applications fail to run with the error “container has runAsNonRoot and the image will run as root”, add the RBAC cluster roles from here:

https://github.com/dstamen/Kubernetes/blob/master/demo-applications/allow-runasnonroot-clusterrole.yaml

PSP (Pod Security Policy) is enabled by default in Tanzu Kubernetes Clusters, so a PSP policy needs to be applied before dropping a deployment on the cluster, as shown in the link above
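
Assuming you want to apply that cluster role as-is, the raw file can be applied directly (URL derived from the GitHub link above):

$ kubectl apply -f https://raw.githubusercontent.com/dstamen/Kubernetes/master/demo-applications/allow-runasnonroot-clusterrole.yaml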

14. Now let's deploy a simple nginx deployment using the following YAML

apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  ports:
    - port: 80
  selector:
    app: nginx
  type: LoadBalancer

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

15. Apply the YAML config to create the Deployment

$ kubectl create -f nginx-deployment.yaml
service/nginx created
deployment.apps/nginx created

16. Verify everything was deployed successfully as shown below
  
$ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-574b87c764-2zrp2   1/1     Running   0          74s
pod/nginx-574b87c764-p8d45   1/1     Running   0          74s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1      <none>          443/TCP        29m
service/nginx        LoadBalancer   10.111.0.106   10.193.191.68   80:31921/TCP   75s
service/supervisor   ClusterIP      None           <none>          6443/TCP       29m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   2/2     2            2           75s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-574b87c764   2         2         2       75s

To access NGINX use the external IP address of the service "service/nginx" on port 80
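
For example, using the external IP from the output above:

$ curl -I http://10.193.191.68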



17. Finally let's return to the vSphere client and see where the Tanzu Kubernetes Cluster we created exists. It will be inside the vSphere namespace "ns1", which is where we drove our install of the Tanzu Kubernetes Cluster from.





More Information

Introducing vSphere 7: Modern Applications & Kubernetes
https://blogs.vmware.com/vsphere/2020/03/vsphere-7-kubernetes-tanzu.html

How to Get vSphere with Kubernetes
https://blogs.vmware.com/vsphere/2020/04/how-to-get-vsphere-with-kubernetes.html

vSphere with Kubernetes 101 Whitepaper
https://blogs.vmware.com/vsphere/2020/03/vsphere-with-kubernetes-101.html



Categories: Fusion Middleware

Ever wondered if Cloud Foundry can run on Kubernetes?

Pas Apicella - Wed, 2020-04-15 23:36
Well, yep, it's possible now and is available to be tested as per the repo below. In this post we will show what we can do with cf-for-k8s as it stands now, once installed, along with some requirements on how to install it.

https://github.com/cloudfoundry/cf-for-k8s

Before we get started it's important to note this, taken directly from the GitHub repo itself.

"This is a highly experimental project to deploy the new CF Kubernetes-centric components on Kubernetes. It is not meant for use in production and is subject to change in the future"

Steps

1. First we need a k8s cluster. I am using k8s on vSphere using VMware Enterprise PKS but you can use GKE or any other cluster that supports the minimum requirements.

To deploy cf-for-k8s as is, the cluster should:
  • be running version 1.14.x, 1.15.x, or 1.16.x
  • have a minimum of 5 nodes
  • have a minimum of 3 CPU, 7.5GB memory per node
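
A quick way to check your cluster against these requirements (not part of the original post):

$ kubectl version --short
$ kubectl get nodes
$ kubectl describe nodes | grep -A 5 Capacity
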
2. There are also some IaaS requirements as shown below.

  • Supports LoadBalancer services
  • Defines a default StorageClass

3. Finally, pushing source-code based apps to Cloud Foundry requires an OCI compliant registry. I am using GCR but Docker Hub also works.

Under the hood, cf-for-k8s uses Cloud Native Buildpacks to detect and build the app source code into an OCI compliant image and pushes the app image to the registry. Though cf-for-k8s has been tested with Google Container Registry and Dockerhub.com, it should work with any external OCI compliant registry.

So if you are like me, using GCR and following along, you will need to create an IAM account with storage privileges for GCR. Assuming you want to create a new IAM account on GCP, follow these steps, ensuring you set your GCP project ID as shown below

    $ export GCP_PROJECT_ID={project-id-in-gcp}

    $ gcloud iam service-accounts create push-image

    $ gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
        --member serviceAccount:push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com \
        --role roles/storage.admin

    $ gcloud iam service-accounts keys create \
      --iam-account "push-image@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
      gcr-storage-admin.json

    4. So to install cf-for-k8s we simply follow the detailed steps below.

    https://github.com/cloudfoundry/cf-for-k8s/blob/master/docs/deploy.md

    Note: We are using GCR, so the generate-values script we run looks as follows, which injects the GCR IAM account key we created above into the YML file.

    $ ./hack/generate-values.sh -d DOMAIN -g ./gcr-push-storage-admin.json > /tmp/cf-values.yml

    5. In about 8 minutes or so you should have Cloud Foundry running on your Kubernetes cluster. Let's run a series of commands to verify that.

    - Here we see a set of Cloud Foundry namespaces named "cf-{name}"
      
    $ kubectl get ns
    NAME STATUS AGE
    cf-blobstore Active 8d
    cf-db Active 8d
    cf-system Active 8d
    cf-workloads Active 8d
    cf-workloads-staging Active 8d
    console Active 122m
    default Active 47d
    istio-system Active 8d
    kpack Active 8d
    kube-node-lease Active 47d
    kube-public Active 47d
    kube-system Active 47d
    metacontroller Active 8d
    pks-system Active 47d
    vmware-system-tmc Active 12d

    - Let's check the Cloud Foundry system is up and running by inspecting the status of the PODS as shown below
      
    $ kubectl get pods -n cf-system
    NAME READY STATUS RESTARTS AGE
    capi-api-server-6d89f44d5b-krsck 5/5 Running 2 8d
    capi-api-server-6d89f44d5b-pwv4b 5/5 Running 2 8d
    capi-clock-6c9f6bfd7-nmjrd 2/2 Running 0 8d
    capi-deployment-updater-79b4dc76-g2x6s 2/2 Running 0 8d
    capi-kpack-watcher-6c67984798-2x5n2 2/2 Running 0 8d
    capi-worker-7f8d499494-cd8fx 2/2 Running 0 8d
    cfroutesync-6fb9749-cbv6w 2/2 Running 0 8d
    eirini-6959464957-25ttx 2/2 Running 0 8d
    fluentd-4l9ml 2/2 Running 3 8d
    fluentd-mf8x6 2/2 Running 3 8d
    fluentd-smss9 2/2 Running 3 8d
    fluentd-vfzhl 2/2 Running 3 8d
    fluentd-vpn4c 2/2 Running 3 8d
    log-cache-559846dbc6-p85tk 5/5 Running 5 8d
    metric-proxy-76595fd7c-x9x5s 2/2 Running 0 8d
    uaa-79d77dbb77-gxss8 2/2 Running 2 8d

    - Let's view the ingress gateway resources in the "istio-system" namespace
      
    $ kubectl get all -n istio-system
    NAME READY STATUS RESTARTS AGE
    pod/istio-citadel-bc7957fc4-nn8kx 1/1 Running 0 8d
    pod/istio-galley-6478b6947d-6dl9h 2/2 Running 0 8d
    pod/istio-ingressgateway-fcgvg 2/2 Running 0 8d
    pod/istio-ingressgateway-jzkpj 2/2 Running 0 8d
    pod/istio-ingressgateway-ptjzz 2/2 Running 0 8d
    pod/istio-ingressgateway-rtwk4 2/2 Running 0 8d
    pod/istio-ingressgateway-tvz8p 2/2 Running 0 8d
    pod/istio-pilot-67955bdf6f-nrhzp 2/2 Running 0 8d
    pod/istio-policy-6b786c6f65-m7tj5 2/2 Running 3 8d
    pod/istio-sidecar-injector-5669cc5894-tq55v 1/1 Running 0 8d
    pod/istio-telemetry-77b745cd6b-wn2dx 2/2 Running 3 8d

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/istio-citadel ClusterIP 10.100.200.216 <none> 8060/TCP,15014/TCP 8d
    service/istio-galley ClusterIP 10.100.200.214 <none> 443/TCP,15014/TCP,9901/TCP,15019/TCP 8d
    service/istio-ingressgateway LoadBalancer 10.100.200.105 10.195.93.142 15020:31515/TCP,80:31666/TCP,443:30812/TCP,15029:31219/TCP,15030:31566/TCP,15031:30615/TCP,15032:30206/TCP,15443:32555/TCP 8d
    service/istio-pilot ClusterIP 10.100.200.182 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 8d
    service/istio-policy ClusterIP 10.100.200.98 <none> 9091/TCP,15004/TCP,15014/TCP 8d
    service/istio-sidecar-injector ClusterIP 10.100.200.160 <none> 443/TCP 8d
    service/istio-telemetry ClusterIP 10.100.200.5 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 8d

    NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
    daemonset.apps/istio-ingressgateway 5 5 5 5 5 <none> 8d

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/istio-citadel 1/1 1 1 8d
    deployment.apps/istio-galley 1/1 1 1 8d
    deployment.apps/istio-pilot 1/1 1 1 8d
    deployment.apps/istio-policy 1/1 1 1 8d
    deployment.apps/istio-sidecar-injector 1/1 1 1 8d
    deployment.apps/istio-telemetry 1/1 1 1 8d

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/istio-citadel-bc7957fc4 1 1 1 8d
    replicaset.apps/istio-galley-6478b6947d 1 1 1 8d
    replicaset.apps/istio-pilot-67955bdf6f 1 1 1 8d
    replicaset.apps/istio-policy-6b786c6f65 1 1 1 8d
    replicaset.apps/istio-sidecar-injector-5669cc5894 1 1 1 8d
    replicaset.apps/istio-telemetry-77b745cd6b 1 1 1 8d

    NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
    horizontalpodautoscaler.autoscaling/istio-pilot Deployment/istio-pilot 0%/80% 1 5 1 8d
    horizontalpodautoscaler.autoscaling/istio-policy Deployment/istio-policy 2%/80% 1 5 1 8d
    horizontalpodautoscaler.autoscaling/istio-telemetry Deployment/istio-telemetry 7%/80% 1 5 1 8d

    You can use kapp to verify your install as follows:

    $ kapp list
    Target cluster 'https://cfk8s.mydomain:8443' (nodes: 46431ba8-2048-41ea-a5c9-84c3a3716f6e, 4+)

    Apps in namespace 'default'

    Name  Label                                 Namespaces                                                                                                  Lcs   Lca
    cf    kapp.k14s.io/app=1586305498771951000  (cluster),cf-blobstore,cf-db,cf-system,cf-workloads,cf-workloads-staging,istio-system,kpack,metacontroller  true  8d

    Lcs: Last Change Successful
    Lca: Last Change Age

    1 apps

    Succeeded

    6. Now that Cloud Foundry is running we need to configure DNS on our IaaS provider so that the wildcard subdomain of the system domain and the wildcard subdomain of all app domains point to the external IP of the Istio Ingress Gateway service. You can retrieve the external IP of this service by running a command as follows

    $ kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

    Note: The DNS A record wildcard entry would look as follows, ensuring you use the DOMAIN you gave to the install script

    DNS entry should be mapped to : *.{DOMAIN}
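
    A small sketch for capturing that IP before creating the DNS record (using the same kubectl command from above):

    $ export INGRESS_IP=$(kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
    $ echo "Create a wildcard A record: *.{DOMAIN} -> ${INGRESS_IP}"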

    7. Once done we can use dig to verify we have set up our DNS wildcard entry correctly. We are looking for an ANSWER section that maps to the IP address we retrieved in step 6.

    $ dig api.mydomain

    ; <<>> DiG 9.10.6 <<>> api.mydomain
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58127
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;api.mydomain. IN A

    ;; ANSWER SECTION:
    api.mydomain. 60 IN A 10.0.0.1

    ;; Query time: 216 msec
    ;; SERVER: 10.10.6.6#53(10.10.6.7)
    ;; WHEN: Thu Apr 16 11:46:59 AEST 2020
    ;; MSG SIZE  rcvd: 83

    8. So now we are ready to log in using the Cloud Foundry CLI. Make sure you're using the latest version as shown below

    $ cf version
    cf version 6.50.0+4f0c3a2ce.2020-03-03

    Note: You can install the Cloud Foundry CLI from the following repo

    https://github.com/cloudfoundry/cli
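
    On a Mac, for example, one option is Homebrew via the cloudfoundry tap documented in that repo:

    $ brew install cloudfoundry/tap/cf-cli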

    9. Ok so we are ready to target the API endpoint and log in. As you may have guessed the API endpoint is "api.{DOMAIN}", so go ahead and do that as shown below. If this fails it means you have to re-visit steps 6 and 7 above.

    $ cf api https://api.mydomain --skip-ssl-validation
    Setting api endpoint to https://api.mydomain...
    OK

    api endpoint:   https://api.mydomain
    api version:    2.148.0

    10. Now we need the admin password to log in using UAA. This was generated for us when we ran the generate-values script above to produce our install YML. You can run a simple command as follows against the YML file to get the password.

    $ head cf-values.yml
    #@data/values
    ---
    system_domain: "mydomain"
    app_domains:
    #@overlay/append
    - "mydomain"
    cf_admin_password: 5nxm5bnl23jf5f0aivbs

    cf_blobstore:
      secret_key: 04gihynpr0x4dpptc5a5
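
    If you just want the password value itself, a quick grep over the generated values file works as well (assuming the file is in the current directory as in the head command above):

    $ grep cf_admin_password cf-values.yml
    cf_admin_password: 5nxm5bnl23jf5f0aivbs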

    11. To log in I use a script as follows, which creates a space for me and then targets it so I can push applications into it.

    cf auth admin 5nxm5bnl23jf5f0aivbs
    cf target -o system
    cf create-space development
    cf target -s development

    Output when we run this script (or just type each command one at a time) will look as follows.

    API endpoint: https://api.mydomain
    Authenticating...
    OK

    Use 'cf target' to view or set your target org and space.
    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development
    Creating space development in org system as admin...
    OK

    Space development already exists

    api endpoint:   https://api.mydomain
    api version:    2.148.0
    user:           admin
    org:            system
    space:          development

    12. If we type in "cf apps" we will see we have no applications deployed which is expected.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    No apps found

    13. So let's deploy our first application. In this example we will use a NodeJS Cloud Foundry application which exists at the following GitHub repo. We will deploy it using its source code only. To do that we will clone it onto our file system as shown below.

    https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    $ git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs

    14. Edit cf-sample-app-nodejs/manifest.yml to look as follows by removing the random-route entry

    ---
    applications:
    - name: cf-nodejs
      memory: 512M
      instances: 1

    15. Now to push the Node app we are going to use two terminal windows. One to actually push the app and the other to view the logs.


    16. Now in the first terminal window issue this command, ensuring the app cloned above exists in the directory you are in, as shown by the path it references

    $ cf push test-node-app -p ./cf-sample-app-nodejs

    17. In the second terminal window issue this command.

    $ cf logs test-node-app

    18. You should see log output while the application is being pushed.



    19. Wait for the "cf push" to complete as shown below

    ....

    Waiting for app to start...

    name:                test-node-app
    requested state:     started
    isolation segment:   placeholder
    routes:              test-node-app.system.run.haas-210.pez.pivotal.io
    last uploaded:       Thu 16 Apr 13:04:59 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T03:05:13Z   0.0%   0 of 1G   0 of 1G


    Verify we have deployed our Node app and it has a fully qualified URL for us to access it as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           1/1         1G       1G     test-node-app.mydomain

    ** Browser **



    Ok so what actually happened on our k8s cluster to get this application deployed? A series of steps were performed, which is why "cf push" blocks until they have all completed. At a high level these are the 3 main steps (you can watch them happen with the command shown after this list):
    1. CAPI uploads the code and puts it in the internal blobstore
    2. kpack builds the image and stores it in the registry you defined at install time (GCR for us)
    3. Eirini schedules the pod
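
    If you want to watch steps 2 and 3 while a push is running, one simple option is to tail the pods in the staging namespace we listed earlier (cf-workloads-staging):

    $ kubectl get pods -n cf-workloads-staging --watch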

    GCR "cf-workloads" folder


    kpack is where a lot of the magic actually occurs. kpack is based on the CNCF sandbox project known as Cloud Native Buildpacks and can create OCI compliant images from source code and/or artifacts automatically for you. CNB/kpack doesn't stop there; to find out more I suggest going to the following links.

    https://tanzu.vmware.com/content/blog/introducing-kpack-a-kubernetes-native-container-build-service

    https://buildpacks.io/

    Buildpacks provide a higher-level abstraction for building apps compared to Dockerfiles.

    Specifically, buildpacks:
    • Provide a balance of control that reduces the operational burden on developers and supports enterprise operators who manage apps at scale.
    • Ensure that apps meet security and compliance requirements without developer intervention.
    • Provide automated delivery of both OS-level and application-level dependency upgrades, efficiently handling day-2 app operations that are often difficult to manage with Dockerfiles.
    • Rely on compatibility guarantees to safely apply patches without rebuilding artifacts and without unintentionally changing application behavior.
    20. Let's run a series of kubectl commands to see what was created. All of our apps get deployed to the namespace "cf-workloads".

    - What POD's are running in cf-workloads
      
    $ kubectl get pods -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    test-node-app-development-c346b24349-0 2/2 Running 0 26m

    - You will notice we have a POD running with 2 containers, BUT we also have a Service which is used internally to route to one or more PODs using ClusterIP as shown below
      
    $ kubectl get svc -n cf-workloads
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    s-1999c874-e300-45e1-b5ff-1a69b7649dd6 ClusterIP 10.100.200.26 <none> 8080/TCP 27m

    - Each POD has two containers named as follows (a quick check follows below).

    opi: This is your actual container instance running your code
    istio-proxy: This, as the name suggests, is a proxy container which among other things routes requests to the opi container when required
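
    A minimal way to confirm those two container names, using the pod name from the output above (adjust the name to match yours):

    $ kubectl get pod test-node-app-development-c346b24349-0 -n cf-workloads -o jsonpath='{.spec.containers[*].name}'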

    21. Ok so let's scale our application to run 2 instances. To do that we simply use Cloud Foundry CLI as follows

    $ cf scale test-node-app -i 2
    Scaling app test-node-app in org system / space development as admin...
    OK

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    And using kubectl as expected we end up with another POD created for the second instance
      
    $ kubectl get pods -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    test-node-app-development-c346b24349-0 2/2 Running 0 44m
    test-node-app-development-c346b24349-1 2/2 Running 0 112s

    If we dig a bit deeper we will see that a StatefulSet backs the application deployment as shown below
      
    $ kubectl get all -n cf-workloads
    NAME READY STATUS RESTARTS AGE
    pod/test-node-app-development-c346b24349-0 2/2 Running 0 53m
    pod/test-node-app-development-c346b24349-1 2/2 Running 0 10m

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/s-1999c874-e300-45e1-b5ff-1a69b7649dd6 ClusterIP 10.100.200.26 <none> 8080/TCP 53m

    NAME READY AGE
    statefulset.apps/test-node-app-development-c346b24349 2/2 53m

    Ok so as you may have guessed we can deploy many different types of apps because kpack supports multiple languages including Java, Go, Python etc.

    22. Let's deploy a Go application as follows.

    $ git clone https://github.com/swisscom/cf-sample-app-go

    $ cf push my-go-app -m 64M -p ./cf-sample-app-go
    Pushing app my-go-app to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:       my-go-app
      path:       /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/cf-sample-app-go
    + memory:     64M
      routes:
    +   my-go-app.mydomain

    Creating app my-go-app...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.43 KiB / 1.43 KiB [====================================================================================] 100.00% 1s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                my-go-app
    requested state:     started
    isolation segment:   placeholder
    routes:              my-go-app.mydomain
    last uploaded:       Thu 16 Apr 14:06:25 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   64M
         state     since                  cpu    memory     disk      details
    #0   running   2020-04-16T04:06:43Z   0.0%   0 of 64M   0 of 1G

    We can invoke the application using "curl" or something more modern like "HTTPie"

    $ http http://my-go-app.mydomain
    HTTP/1.1 200 OK
    content-length: 59
    content-type: text/plain; charset=utf-8
    date: Thu, 16 Apr 2020 04:09:46 GMT
    server: istio-envoy
    x-envoy-upstream-service-time: 6

    Congratulations! Welcome to the Swisscom Application Cloud!

    If we tailed the logs using "cf logs my-go-app" we would have seen that kpack intelligently determines this is a Go app and uses the Go buildpack to compile the code and produce a container image.

    ...
    2020-04-16T14:05:27.52+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Warning: Image "gcr.io/fe-papicella/cf-workloads/f0072cfa-0e7e-41da-9bf7-d34b2997fb94" not found
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Compiler Buildpack 0.0.83
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go 1.13.7: Contributing to layer
    2020-04-16T14:05:29.59+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Downloading from https://buildpacks.cloudfoundry.org/dependencies/go/go-1.13.7-bionic-5bb47c26.tgz
    2020-04-16T14:05:35.13+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Verifying checksum
    2020-04-16T14:05:35.63+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT     Expanding to /layers/org.cloudfoundry.go-compiler/go
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Go Mod Buildpack 0.0.84
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT Setting environment variables
    2020-04-16T14:05:41.48+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    2020-04-16T14:05:41.68+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT github.com/swisscom/cf-sample-app-go
    2020-04-16T14:05:41.69+1000 [/809cbe09-cf3f-4601-aea5-8d93d18f17ce] OUT : Contributing to layer
    ...

    Using "cf apps" we now have two applications deployed as shown below.

    $ cf apps
    Getting apps in org system / space development as admin...
    OK

    name            requested state   instances   memory   disk   urls
    my-go-app       started           1/1         64M      1G     my-go-app.mydomain
    test-node-app   started           2/2         1G       1G     test-node-app.mydomain

    23. Finally, kpack and the buildpacks ecosystem can also deploy already-built artifacts. The Java buildpack, for example, is capable of not only building from source but can also take a fat Spring Boot JAR file as shown below. In this example we have packaged the artifact we wish to deploy as "PivotalMySQLWeb-1.0.0-SNAPSHOT.jar".

    $ cf push piv-mysql-web -p PivotalMySQLWeb-1.0.0-SNAPSHOT.jar -i 1 -m 1g
    Pushing app piv-mysql-web to org system / space development as admin...
    Getting app info...
    Creating app with these attributes...
    + name:        piv-mysql-web
      path:        /Users/papicella/pivotal/PCF/APJ/PEZ-HaaS/haas-210/cf-for-k8s/artifacts/PivotalMySQLWeb-1.0.0-SNAPSHOT.jar
    + instances:   1
    + memory:      1G
      routes:
    +   piv-mysql-web.mydomain

    Creating app piv-mysql-web...
    Mapping routes...
    Comparing local files to remote cache...
    Packaging files to upload...
    Uploading files...
     1.03 MiB / 1.03 MiB [====================================================================================] 100.00% 2s

    Waiting for API to complete processing files...

    Staging app and tracing logs...

    Waiting for app to start...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory    disk      details
    #0   running   2020-04-16T04:17:43Z   0.0%   0 of 1G   0 of 1G


    Of course the usual commands you expect from the CF CLI still exist. Here are some examples.

    $ cf app piv-mysql-web
    Showing health and status for app piv-mysql-web in org system / space development as admin...

    name:                piv-mysql-web
    requested state:     started
    isolation segment:   placeholder
    routes:              piv-mysql-web.mydomain
    last uploaded:       Thu 16 Apr 14:17:22 AEST 2020
    stack:
    buildpacks:

    type:           web
    instances:      1/1
    memory usage:   1024M
         state     since                  cpu    memory         disk      details
    #0   running   2020-04-16T04:17:43Z   0.1%   195.8M of 1G   0 of 1G

    $ cf env piv-mysql-web
    Getting env variables for app piv-mysql-web in org system / space development as admin...
    OK

    System-Provided:

    {
     "VCAP_APPLICATION": {
      "application_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "application_name": "piv-mysql-web",
      "application_uris": [
       "piv-mysql-web.mydomain"
      ],
      "application_version": "750d9530-e756-4b74-ac86-75b61c60fe2d",
      "cf_api": "https://api. mydomain",
      "limits": {
       "disk": 1024,
       "fds": 16384,
       "mem": 1024
      },
      "name": "piv-mysql-web",
      "organization_id": "8ae94610-513c-435b-884f-86daf81229c8",
      "organization_name": "system",
      "process_id": "3b8bad84-2654-46f4-b32a-ebad0a4993c1",
      "process_type": "web",
      "space_id": "7f3d78ae-34d4-42e4-8ab8-b34e46e8ad1f",
      "space_name": "development",
      "uris": [
       "piv-mysql-web. mydomain"
      ],
      "users": null,
      "version": "750d9530-e756-4b74-ac86-75b61c60fe2d"
     }
    }

    No user-defined env variables have been set

    No running env variables have been set

    No staging env variables have been set

    So what about some sort of UI? That brings us to step 24.

    24. Let's start by installing helm using a script as follows. Note that this script installs Helm 2, which uses Tiller.

    #!/usr/bin/env bash

    echo "install helm"
    # installs helm with bash commands for easier command line integration
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
    # add a service account within a namespace to segregate tiller
    kubectl --namespace kube-system create sa tiller
    # create a cluster role binding for tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole cluster-admin \
        --serviceaccount=kube-system:tiller

    echo "initialize helm"
    # initialized helm within the tiller service account
    helm init --service-account tiller
    # updates the repos for Helm repo integration
    helm repo update

    echo "verify helm"
    # verify that helm is installed in the cluster
    kubectl get deploy,svc tiller-deploy -n kube-system

    Once installed you can verify helm is working by running "helm ls", which should come back with no output as you haven't installed anything with helm yet.

    25. Run the following to install Stratos an open source Web UI for Cloud Foundry

    For more information on Stratos visit this URL - https://github.com/cloudfoundry/stratos

    $ helm install stratos/console --namespace=console --name my-console --set console.service.type=LoadBalancer
    NAME:   my-console
    LAST DEPLOYED: Thu Apr 16 09:48:19 2020
    NAMESPACE: console
    STATUS: DEPLOYED

    RESOURCES:
    ==> v1/Deployment
    NAME        READY  UP-TO-DATE  AVAILABLE  AGE
    stratos-db  0/1    1           0          2s

    ==> v1/Job
    NAME                   COMPLETIONS  DURATION  AGE
    stratos-config-init-1  0/1          2s        2s

    ==> v1/PersistentVolumeClaim
    NAME                              STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    console-mariadb                   Bound   pvc-4ff20e21-1852-445f-854f-894bc42227ce  1Gi       RWO           fast          2s
    my-console-encryption-key-volume  Bound   pvc-095bb7ed-7be9-4d93-b63a-a8af569361b6  20Mi      RWO           fast          2s

    ==> v1/Pod(related)
    NAME                         READY  STATUS             RESTARTS  AGE
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s
    stratos-config-init-1-2t47x  0/1    ContainerCreating  0         2s

    ==> v1/Role
    NAME              AGE
    config-init-role  2s

    ==> v1/RoleBinding
    NAME                              AGE
    config-init-secrets-role-binding  2s

    ==> v1/Secret
    NAME                  TYPE    DATA  AGE
    my-console-db-secret  Opaque  5     2s
    my-console-secret     Opaque  5     2s

    ==> v1/Service
    NAME                TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
    my-console-mariadb  ClusterIP     10.100.200.162  <none>         3306/TCP       2s
    my-console-ui-ext   LoadBalancer  10.100.200.171  10.195.93.143  443:31524/TCP  2s

    ==> v1/ServiceAccount
    NAME         SECRETS  AGE
    config-init  1        2s

    ==> v1/StatefulSet
    NAME     READY  AGE
    stratos  0/1    2s

    26. You can verify it installed in a few ways as shown below.

    - Use helm with "helm ls"
      
    $ helm ls
    NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    my-console 1 Thu Apr 16 09:48:19 2020 DEPLOYED console-3.0.0 3.0.0 console

    - Verify everything is running using "kubectl get all -n console"
      
    $ k get all -n console
    NAME READY STATUS RESTARTS AGE
    pod/stratos-0 0/2 ContainerCreating 0 40s
    pod/stratos-config-init-1-2t47x 0/1 Completed 0 40s
    pod/stratos-db-69ddf7f5f7-gb8xm 0/1 Running 0 40s

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/my-console-mariadb ClusterIP 10.100.200.162 <none> 3306/TCP 40s
    service/my-console-ui-ext LoadBalancer 10.100.200.171 10.195.1.1 443:31524/TCP 40s

    NAME READY UP-TO-DATE AVAILABLE AGE
    deployment.apps/stratos-db 0/1 1 0 41s

    NAME DESIRED CURRENT READY AGE
    replicaset.apps/stratos-db-69ddf7f5f7 1 1 0 41s

    NAME READY AGE
    statefulset.apps/stratos 0/1 41s

    NAME COMPLETIONS DURATION AGE
    job.batch/stratos-config-init-1 1/1 27s 42s

    27. Now to open up the UI web app we just need the external IP from "service/my-console-ui-ext" as per the output above.

    Navigate to https://{external-ip}:443
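
    You can also pull that external IP straight from the service with jsonpath (a small sketch using the service and namespace from the output above):

    $ kubectl get svc my-console-ui-ext -n console -o jsonpath='{.status.loadBalancer.ingress[0].ip}'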

    28. Create a local user to login with, using the password you set and the username "admin".

    Note: The password is just to get into the UI. It can be anything you want it to be.



    29. Now we need to click on "Endpoints" and register a Cloud Foundry endpoint using the same login details we used with the Cloud Foundry API earlier at step 11.

    Note: The API endpoint is what you used at step 9 and make sure to skip SSL validation

    Once connected there are our deployed applications.



    Summary 

    In this post we explored what running Cloud Foundry on Kubernetes looks like. For those familiar with Cloud Foundry or Tanzu Application Service (formerly known as Pivotal Application Service), from a development perspective everything is the same, using familiar CF CLI commands. What changes is that the footprint required to run Cloud Foundry is much less complicated and runs on Kubernetes itself, meaning even more places to run Cloud Foundry than ever before, plus the ability to leverage community-based Kubernetes projects, further simplifying Cloud Foundry.

    For more information see the links below.

    More Information

    GitHub Repo
    https://github.com/cloudfoundry/cf-for-k8s

    VMware Tanzu Application Service for Kubernetes (Beta)
    https://network.pivotal.io/products/tas-for-kubernetes/
    Categories: Fusion Middleware

    Thank you kubie exactly what I needed

    Pas Apicella - Sun, 2020-04-05 22:59
    On average I deal with at least 5 different Kubernetes clusters so today when I saw / heard of kubie I had to install it.

    kubie is an alternative to kubectx, kubens and the k-on-prompt modification script. It offers context switching, namespace switching and prompt modification in a way that makes each shell independent from others.

    Installing kubie right now involves downloading the release from the link below. Homebrew support is pending.

    https://github.com/sbstp/kubie/releases

    Once added to your path it's as simple as this

    1. Check kubie is in your path

    $ which kubie
    /usr/local/bin/kubie

    2. Run "kubie ctx" as follows and select the "apples" k8s context

    papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie ctx



    [apples|default] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    3. Switch to a new namespace as shown below and watch how the PS1 prompt changes to indicate the k8s context and the new namespace we have set as a result of the command below

    $ kubectl config set-context --current --namespace=vmware-system-tmc
    Context "apples" modified.

    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    4. Finally, kubie exec is a subcommand that allows you to run commands inside of a context, a bit like kubectl exec allows you to run a command inside a pod. Here are some examples below
      
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie exec apples vmware-system-tmc kubectl get pods
    NAME READY STATUS RESTARTS AGE
    agent-updater-75f88b44f6-9f9jj 1/1 Running 0 2d23h
    agentupdater-workload-1586145240-kmwln 1/1 Running 0 3s
    cluster-health-extension-76d9b549b5-dlhms 1/1 Running 0 2d23h
    data-protection-59c88488bd-9wxk2 1/1 Running 0 2d23h
    extension-manager-8d69d95fd-sgksw 1/1 Running 0 2d23h
    extension-updater-77fdc4574d-fkcwb 1/1 Running 0 2d23h
    inspection-extension-64857d4d95-nl76f 1/1 Running 0 2d23h
    intent-agent-6794bb7995-jmcxg 1/1 Running 0 2d23h
    policy-sync-extension-7c968c9dcd-x4jvl 1/1 Running 0 2d23h
    policy-webhook-779c6f6c6-ppbn6 1/1 Running 0 2d23h
    policy-webhook-779c6f6c6-r82h4 1/1 Running 1 2d23h
    sync-agent-d67f95889-qbxtb 1/1 Running 6 2d23h
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$ kubie exec apples default kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pbs-demo-image-build-1-mnh6v-build-pod 0/1 Completed 0 2d23h
    [apples|vmware-system-tmc] papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-236$

    More Information

    Blog Page:
    https://blog.sbstp.ca/introducing-kubie/

    GitHub Page:
    https://github.com/sbstp/kubie
    Categories: Fusion Middleware

    VMware Enterprise PKS 1.7 has just been released

    Pas Apicella - Thu, 2020-04-02 22:51
    VMware Enterprise PKS 1.7 was just released. For details please review the release notes using the link below.

    https://docs.pivotal.io/pks/1-7/release-notes.html



    More Information

    https://docs.pivotal.io/pks/1-7/index.html


    Categories: Fusion Middleware

    kpack 0.0.6 and the Docker Hub secret annotation change

    Pas Apicella - Mon, 2020-03-02 16:53
    I decided to try out the 0.0.6 release of kpack and noticed a small change to how you define your registry credentials when using Docker Hub. If you don't make this change, kpack will fail to push to Docker Hub, with errors as follows when trying to export the image.

    [export] *** Images (sha256:1335a241ab0428043a89626c99ddac8dfb2719b79743652e535898600439e80f):
    [export]       pasapples/pbs-demo-image:latest - UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]
    [export]       index.docker.io/pasapples/pbs-demo-image:b1.20200301.232548 - UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]
    [export] ERROR: failed to export: failed to write image to the following tags: [pasapples/pbs-demo-image:latest: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]],[index.docker.io/pasapples/pbs-demo-image:b1.20200301.232548: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:push Class: Name:pasapples/pbs-demo-image Type:repository] map[Action:pull Class: Name:cloudfoundry/run Type:repository]]]

    Previously, in kpack 0.0.5, you defined your Docker Hub registry secret as follows:

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub
      annotations:
        build.pivotal.io/docker: index.docker.io
    type: kubernetes.io/basic-auth
    stringData:
      username: dockerhub-user
      password: ...

    Now with kpack 0.0.6 you need to define the "annotations" entry using an HTTPS URL with "/v1/" appended to the end, as shown below.

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub
      annotations:
        build.pivotal.io/docker: https://index.docker.io/v1/
    type: kubernetes.io/basic-auth
    stringData:
      username: dockerhub-user
      password: ...

    More Information

    https://github.com/pivotal/kpack
    Categories: Fusion Middleware

    Nice new look and feel to spring.io web site!!!!

    Pas Apicella - Sun, 2020-02-16 22:06
    Have you seen the new look and feel for spring.io? It's worth a look.

    https://spring.io/



    Categories: Fusion Middleware

    Taking VMware Tanzu Mission Control for a test drive this time creating a k8s cluster on AWS

    Pas Apicella - Tue, 2020-02-11 04:12
    Previously I blogged about how to use VMware Tanzu Mission Control (TMC) to attach to kubernetes clusters and in that example we used a GCP GKE cluster. That blog entry exists here

    Taking VMware Tanzu Mission Control for a test drive
    http://theblasfrompas.blogspot.com/2020/02/taking-tanzu-mission-control-for-test.html

    In this example we will use the "Create Cluster" button to create a new k8s cluster on AWS that will be managed by TMC for its entire lifecycle.

    Steps

    Note: Before getting started you need to create a "Cloud Provider Account", which in this case is done using AWS as shown below. You can create one or more connected cloud provider accounts. Adding accounts allows you to start using VMware TMC to create clusters, add data protection, and much more.



    1. Click on the "Clusters" on the left hand navigation bar

    2. In the right hand corner click the button "New Cluster" and select your cloud provider account on AWS as shown below


    3. Fill in the details of your new cluster as shown below ensuring you select the correct AWS region where your cluster will be created.



    4. Click Next

    5. In the next screen I am just going to select a Development control plane



    6. Click Next

    7. Edit the default-node-pool and add 2 worker nodes instead of just 1 as shown below



    8. Click "Create"

    9. This will take you to a screen where your cluster will be created. This can take at least 20 minutes so be patient. Progress is shown as per below



    10. If we switch over to AWS console we will start to see some running instances and other cloud components being created as shown in the images below




    11. Eventually the cluster will be created and you are taken to a summary screen for your cluster. It will take a few minutes for all "Agent and extensions health" checks to show up green, so refresh the page several times until everything shows green as per below.

    Note: This can take up to 10 minutes so be patient




    12. So to access this cluster using "kubectl", use the "Access this Cluster" button in the top right hand corner and it will take you to a screen as follows. Click "Download kubeconfig file" and "Tanzu Mission Control CLI", as you will need both of those files, and save them locally



    13. Make the "tmc" CLI executable and move it onto your $PATH as shown below

    $ chmod +x tmc
    $ sudo mv tmc /usr/local/bin

    14. Access cluster using "kubectl" as follows
      
    $ kubectl --kubeconfig=./kubeconfig-pas-aws-cluster.yml get namespaces
    NAME STATUS AGE
    default Active 19m
    kube-node-lease Active 19m
    kube-public Active 19m
    kube-system Active 19m
    vmware-system-tmc Active 17m

    Note: You will be taken to a web page to authenticate, and once that's done you're good to go as shown below
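
    If you prefer not to pass --kubeconfig on every command, you can export it for the current shell instead (a minimal sketch, assuming the downloaded file sits in the current directory):

    $ export KUBECONFIG=$PWD/kubeconfig-pas-aws-cluster.yml
    $ kubectl get namespaces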


    15. You can view the pods created to allow access from the TMC agent as follows
      
    $ kubectl --kubeconfig=./kubeconfig-pas-aws-cluster.yml get pods --namespace=vmware-system-tmc
    NAME READY STATUS RESTARTS AGE
    agent-updater-7b47c659d-8h2mh 1/1 Running 0 25m
    agentupdater-workload-1581415620-csz5p 0/1 Completed 0 35s
    data-protection-769994df65-6cgfh 1/1 Running 0 24m
    extension-manager-657b467c-k4fkl 1/1 Running 0 25m
    extension-updater-c76785dc9-vnmdl 1/1 Running 0 25m
    inspection-extension-79dcff47f6-7lm5r 1/1 Running 0 24m
    intent-agent-7bdf6c8bd4-kgm46 1/1 Running 0 24m
    policy-sync-extension-8648685fc7-shn5g 1/1 Running 0 24m
    policy-webhook-78f5699b76-bvz5f 1/1 Running 1 24m
    policy-webhook-78f5699b76-td74b 1/1 Running 0 24m
    sync-agent-84f5f8bcdc-mrc9p 1/1 Running 0 24m

    So if you got this far you have now attached a cluster and created a cluster from scratch, all from VMware TMC, and that's just the start.

    Soon I will show how to add some policies to our clusters now that we have them under management.

    More Information

    Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
    https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

    VMware Tanzu Mission Control
    https://cloud.vmware.com/tanzu-mission-control
    Categories: Fusion Middleware

    Taking VMware Tanzu Mission Control for a test drive

    Pas Apicella - Mon, 2020-02-10 19:53
    You may or may not have heard of Tanzu Mission Control (TMC), part of the new VMware Tanzu offering, which will help you build, run and manage modern apps. To find out more about Tanzu Mission Control, here is the blog link.

    https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

    In this blog I show you how easily you can use TMC to monitor your existing k8s clusters. Keep in mind TMC can also create k8s clusters for you but here we will use the "Attach Cluster" part of TMC. Demo as follows

    1. Of course you will need an account on TMC, which for this demo I already have. Once logged in you will see a home screen as follows



    2. In the right hand corner there is an "Attach Cluster" button; click this to attach an existing cluster to TMC. Enter some cluster details, in this case I am attaching to a k8s cluster on GKE and giving it the name "pas-gke-cluster".


    3. Click the "Register" button which takes you to a screen which allows you to install the VMware Tanzu Mission Control agent. This is simply done by using "kubectl apply ..." on your k8s cluster which allows an agent to communicate back to TMC itself. Everything is created in a namespace called "vmware-system-tmc"



    4. Once you have run the "kubectl apply .." on your cluster you can verify the status of the pods and other components installed as follows

    $ kubectl get all --namespace=vmware-system-tmc

    Or you could just check the status of the various pods as shown below and assume everything else was created ok
      
    $ kubectl get pods --namespace=vmware-system-tmc
    NAME READY STATUS RESTARTS AGE
    agent-updater-67bb5bb9c6-khfwh 1/1 Running 0 74m
    agentupdater-workload-1581383460-5dsx9 0/1 Completed 0 59s
    data-protection-657d8bf96c-v627g 1/1 Running 0 73m
    extension-manager-857d46c6c-zfzbj 1/1 Running 0 74m
    extension-updater-6ddd9858cf-lr88r 1/1 Running 0 74m
    inspection-extension-789bb48b6-mnlqj 1/1 Running 0 73m
    intent-agent-cfb49d788-cq8tk 1/1 Running 0 73m
    policy-sync-extension-686c757989-jftjc 1/1 Running 0 73m
    policy-webhook-5cdc7b87dd-8shlp 1/1 Running 0 73m
    policy-webhook-5cdc7b87dd-fzz6s 1/1 Running 0 73m
    sync-agent-84bd6c7bf7-rtzcn 1/1 Running 0 73m

    5. Now at this point click on "Verify Connection" button to confirm the agent in your k8s cluster is able to communicate with TMC

    6. Now let's search for our cluster on the "Clusters" page as shown below



    7. Click on "pas-gke-cluster" and you will be taken to an Overview page as shown below. Ensure all green tick boxes are in place; this may take a few minutes so refresh the page as needed



    8. So, this being an empty cluster, I will create a deployment with 2 pods so we can see how TMC shows this workload in the UI. These kubectl commands should work on any cluster as the image is on Docker Hub

    $ kubectl run pbs-deploy --image=pasapples/pbs-demo-image --replicas=2 --port=8080
    $ kubectl expose deployment pbs-deploy --type=LoadBalancer --port=80 --target-port=8080 --name=pbs-demo-service

    9. Test the workload (Although this isn't really required)

    $ echo "http://`kubectl get svc pbs-demo-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`/customers/1"
    http://104.197.202.165/customers/1

    $ http http://104.197.202.165/customers/1
    HTTP/1.1 200
    Content-Type: application/hal+json;charset=UTF-8
    Date: Tue, 11 Feb 2020 01:43:26 GMT
    Transfer-Encoding: chunked

    {
        "_links": {
            "customer": {
                "href": "http://104.197.202.165/customers/1"
            },
            "self": {
                "href": "http://104.197.202.165/customers/1"
            }
        },
        "name": "pas",
        "status": "active"
    }

    10. Back on the TMC UI click on workloads. You should see our deployment as per below


    11. Click on the deployment "pbs-deploy" to see the status of the pods created as part of the deployment replica set plus the YAML of the deployment itself


    12. Of course this is just scratching the surface but from the other tabs you can see the cluster nodes, namespaces and other information as required not just for your workloads but also for the cluster itself




    One thing to note here is that when I attach a cluster as shown in this demo, the lifecycle of the cluster, for example upgrades, can't be managed / performed by TMC. In the next post I will show how "Create Cluster" is able to control the lifecycle of the cluster as well, as this time TMC will actually create the cluster for us.

    Stay tuned!!!

    More Information

    Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
    https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

    VMware Tanzu Mission Control
    https://cloud.vmware.com/tanzu-mission-control
    Categories: Fusion Middleware

    kubectl tree - A kubectl plugin to explore ownership relationships between Kubernetes objects through ownersReferences

    Pas Apicella - Sun, 2020-01-12 18:51
    A kubectl plugin to explore ownership relationships between Kubernetes objects through ownersReferences on them. To get started and install the plugin visit this page.

    https://github.com/ahmetb/kubectl-tree

    Install Steps

    Install as follows

    1. Create a script as follows

    install-krew.sh

    (
      set -x; cd "$(mktemp -d)" &&
      curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}" &&
      tar zxvf krew.tar.gz &&
      KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
      "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
      "$KREW" update
    )

    2. Install as follows

    papicella@papicella:~/pivotal/software/krew$ ./install-krew.sh
    +++ mktemp -d
    ++ cd /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tmp.kliHlfYB
    ++ curl -fsSLO 'https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}'
    ++ tar zxvf krew.tar.gz
    x ./krew-darwin_amd64
    x ./krew-linux_amd64
    x ./krew-linux_arm
    x ./krew-windows_amd64.exe
    x ./LICENSE
    +++ uname
    +++ tr '[:upper:]' '[:lower:]'
    ++ KREW=./krew-darwin_amd64
    ++ ./krew-darwin_amd64 install --manifest=krew.yaml --archive=krew.tar.gz
    Installing plugin: krew
    Installed plugin: krew

    ...

    3. On a Mac add the following to your PATH and source your profile file or start a new shell

    export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

    4. Check plugin is installed

    $ kubectl plugin list
    The following compatible plugins are available:

    /Users/papicella/.krew/bin/kubectl-krew
    /Users/papicella/.krew/bin/kubectl-tree

    You can also use this:

    $ kubectl tree --help
    Show sub-resources of the Kubernetes object

    Usage:
      kubectl tree KIND NAME [flags]

    Examples:
      kubectl tree deployment my-app
      kubectl tree kservice.v1.serving.knative.dev my-app

    6. Ok now it's installed, let's see what it displays about k8s objects and their relationships on my cluster, which has riff and Knative installed

    $ kubectl tree deployment --namespace=knative-serving networking-istio
    NAMESPACE        NAME                                       READY  REASON  AGE
    knative-serving  Deployment/networking-istio                -              8d
    knative-serving  └─ReplicaSet/networking-istio-7fcd97cbf7   -              8d
    knative-serving    └─Pod/networking-istio-7fcd97cbf7-z4dc9  True           8d

    $ kubectl tree deployment --namespace=riff-system riff-build-controller-manager
    NAMESPACE    NAME                                                    READY  REASON  AGE
    riff-system  Deployment/riff-build-controller-manager                -              8d
    riff-system  └─ReplicaSet/riff-build-controller-manager-5d484d5fc4   -              8d
    riff-system    └─Pod/riff-build-controller-manager-5d484d5fc4-7rhbr  True           8d


    More Information

    GitHub Tree Plugin
    https://github.com/ahmetb/kubectl-tree

    Categories: Fusion Middleware

    Spring Boot JPA project riff function demo

    Pas Apicella - Tue, 2019-12-17 22:09
    riff is an Open Source platform for building and running Functions, Applications, and Containers on Kubernetes. For more information visit the project riff home page https://projectriff.io/

    riff supports running containers using Knative serving which in turn provides support for
    •     0-N autoscaling
    •     Revisions
    •     HTTP routing using Istio ingress
    Want to try an example? If so, head over to the following GitHub project which shows, step by step, how to run a Spring Data JPA function using riff on a GKE cluster.

    https://github.com/papicella/SpringDataJPAFunction


    More Information

    1. Project riff home page
    https://projectriff.io/

    2. Getting started with riff
    https://projectriff.io/docs/v0.5/getting-started

    Categories: Fusion Middleware

    k8s info: VMware Tanzu Octant - A web-based, highly extensible platform for developers to better understand the complexity of Kubernetes clusters

    Pas Apicella - Tue, 2019-12-03 10:33
    Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities

    So how would I install this?

    1. First, on my k8s cluster let's create a deployment and a service. You can skip this step if you already have workloads on your cluster. These kubectl commands will work on any cluster as long as you can pull the image from Docker Hub.

    $ kubectl run pbs-demo --image=pasapples/pbs-demo-image --replicas=2 --port=8080
    $ kubectl expose deploy pbs-demo --type=LoadBalancer --port=80 --target-port=8080
    $ http http://101.195.48.144/customers/1

    HTTP/1.1 200
    Content-Type: application/hal+json;charset=UTF-8
    Date: Tue, 03 Dec 2019 16:11:54 GMT
    Transfer-Encoding: chunked

    {
        "_links": {
            "customer": {
                "href": "http://101.195.48.144/customers/1"
            },
            "self": {
                "href": "http://101.195.48.144/customers/1"
            }
        },
        "name": "pas",
        "status": "active"
    }

    2. To install Octant you can view instructions on the GitHub page as follows

    https://github.com/vmware-tanzu/octant

    Given I am on a Mac it's installed using brew as shown below. For other operating systems refer to the link above.

    $ brew install octant

    3. That's it, you can now launch the UI as shown below.

    $  octant

    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "deployment/configuration", "module-name": "overview"}
    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/containerEditor", "module-name": "overview"}
    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/serviceEditor", "module-name": "overview"}
    2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "octant/deleteObject", "module-name": "configuration"}
    2019-12-03T21:47:56.272+0530 INFO dash/dash.go:370 Using embedded Octant frontend
    2019-12-03T21:47:56.277+0530 INFO dash/dash.go:349 Dashboard is available at http://127.0.0.1:7777

    Octant should immediately launch your default web browser on 127.0.0.1:7777

    And to view our deployed application!!!!







    It's a nice UI and it even has the ability to switch to a different k8s context from the menu bar itself.



    More Information

    1. Seeing is Believing: Octant Reveals the Objects Running in Kubernetes Clusters
    https://blogs.vmware.com/cloudnative/2019/08/12/octant-reveals-objects-running-in-kubernetes-clusters/

    2. GitHub project page
    https://github.com/vmware-tanzu/octant

    Categories: Fusion Middleware

    k8s info: kubectx and kubens to the rescue

    Pas Apicella - Tue, 2019-12-03 05:54
    kubectx is a utility to manage and switch between kubectl(1) contexts. To me this is so handy I can't live without it. I am constantly using k8s everywhere from PKS (Pivotal Container Service) clusters, GKE clusters, minikube and wherever I can get my hands on a cluster.

    So when I heard about kubectx I had to try it, and now I can't live without it as it makes my life so much easier. Here's how.

    Where is my current k8s context and potentially what other contexts could I switch to?


    Ok so I am in the k8s cluster with the context of "apples". Let's switch to "lemons" then


    It's really as simple as that. In my world every k8s cluster is named after a FRUIT.

    Finally if you wish to set the correct context namespace you can use "kubens" to do that just as easily as shown below



    More Information

    https://github.com/ahmetb/kubectx

    https://formulae.brew.sh/formula/kubectx
    Categories: Fusion Middleware

    Joined the ranks of the 100+ CKA/CKAD certified Pivotal Platform Architects

    Pas Apicella - Tue, 2019-12-03 05:22
    I am now officially CKAD certified, and in fact I am Cloud Foundry certified as well. Great to be certified with the leaders in container technology, both with PaaS and CaaS.





    Categories: Fusion Middleware

    Getting started with Pivotal Telemetry Collector

    Pas Apicella - Thu, 2019-10-17 18:44
    Pivotal Telemetry Collector is an automated tool that collects data from a series of Pivotal Cloud Foundry (PCF) APIs found within a foundation and securely sends that data to Pivotal. The tool collects:

    • Configuration data from the Ops Manager API.
    • Optional certificate data from the CredHub API.
    • Optional app, task and service instance usage data from the Usage Service API.

    Pivotal uses this information to do the following:

    • Improve its products and services.
    • Fix problems.
    • Advise customers on how best to deploy and use Pivotal products.
    • Provide better customer support.
    Steps to Run

    1. Download the scripts required to run "Pivotal Telemetry Collector" using this URL from Pivotal Network

    https://network.pivotal.io/products/pivotal-telemetry-collector/

    2. Extract to your file system. You will notice 3 executables; use the right one for your OS, in my case the Mac OSX executable "telemetry-collector-darwin-amd64"

    -rwxr-xr-x   1 papicella  staff  14877449  5 Oct 00:42 telemetry-collector-linux-amd64*
    -rwxr-xr-x   1 papicella  staff  14771312  5 Oct 00:42 telemetry-collector-darwin-amd64*
    -rwxr-xr-x   1 papicella  staff  14447104  5 Oct 00:42 telemetry-collector-windows-amd64.exe*

    3. Make sure you have network access to your PCF env. You will need to hit the Operations Manager URL as well as the CF CLI API and usage service API endpoints as shown below

    Ops Manager endpoint

    $ ping opsmgr-02.haas-yyy.pez.pivotal.io
    PING opsmgr-02.haas-yyy.pez.pivotal.io (10.195.1.1): 56 data bytes
    64 bytes from 10.195.1.1: icmp_seq=0 ttl=58 time=338.412 ms

    CF API endpoint

    $ ping api.system.run.haas-yyy.pez.pivotal.io
    PING api.system.run.haas-yyy.pez.pivotal.io (10.195.1.2): 56 data bytes
    64 bytes from 10.195.1.2: icmp_seq=0 ttl=58 time=380.852 ms

    Usage Service API endpoint

    $ ping app-usage.system.run.haas-yyy.pez.pivotal.io
    PING app-usage.system.run.haas-yyy.pez.pivotal.io (10.195.1.3): 56 data bytes
    64 bytes from 10.195.1.3: icmp_seq=0 ttl=58 time=495.996 ms

    4. Now you can use this via two options. As you would have guessed we are using the CLI given we have downloaded the scripts.

    Concourse: https://docs.pivotal.io/telemetry/1-1/using-concourse.html
    CLI: https://docs.pivotal.io/telemetry/1-1/using-cli.html

    5. So to run our first collection we would run the collector script as follows. More information about the CLI options can be found at the link below or by using the help option "./telemetry-collector-darwin-amd64 --help"

    https://docs.pivotal.io/telemetry/1-1/using-cli.html

    Script Name: run-with-usage.sh

    $ ./telemetry-collector-darwin-amd64 collect --url https://opsmgr-02.haas-yyy.pez.pivotal.io/ --username admin --password {PASSWD} --env-type production --output-dir output --usage-service-url https://app-usage.system.run.haas-yyy.pez.pivotal.io/ --usage-service-client-id push_usage_service --usage-service-client-secret {PUSH-USAGE-SERVICE-PASSWORD} --usage-service-insecure-skip-tls-verify --insecure-skip-tls-verify --cf-api-url https://api.system.run.haas-yyy.pez.pivotal.io

    Note: You would obtain the PUSH-USAGE-SERVICE-PASSWORD from the Ops Manager PAS tile credentials tab as shown in the screen shot below


    6. All set let's try it out

    $ ./run-with-usage.sh
    Collecting data from Operations Manager at https://opsmgr-02.haas-yyy.pez.pivotal.io/
    Collecting data from Usage Service at https://app-usage.system.run.haas-yyy.pez.pivotal.io/
    Wrote output to output/FoundationDetails_1571355194.tar
    Success!

    7. Let's extract the output TAR as follows

    $ cd output/
    $ tar -xvf FoundationDetails_1571355194.tar
    x opsmanager/ops_manager_deployed_products
    x opsmanager/pivotal-container-service_resources
    x opsmanager/pivotal-container-service_properties
    x opsmanager/pivotal-mysql_resources
    x opsmanager/pivotal-mysql_properties
    x opsmanager/cf_resources
    x opsmanager/cf_properties
    x opsmanager/p-compliance-scanner_resources
    x opsmanager/p-compliance-scanner_properties
    x opsmanager/ops_manager_vm_types
    x opsmanager/ops_manager_diagnostic_report
    x opsmanager/ops_manager_installations
    x opsmanager/ops_manager_certificates
    x opsmanager/ops_manager_certificate_authorities
    x opsmanager/metadata
    x usage_service/app_usage
    x usage_service/service_usage
    x usage_service/task_usage
    x usage_service/metadata

    8. Now let's view the output, which is a set of JSON files. To do that I simply use the "cat" command and pipe the result to jq, as shown below

    $ cat ./output/opsmanager/ops_manager_installations | jq -r
    {
      "installations": [
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "successful",
              "guid": "p-compliance-scanner-a53448be03a372a13d89",
              "identifier": "p-compliance-scanner",
              "label": "Compliance Scanner for PCF",
              "product_version": "1.0.0"
            }
          ],
          "deletions": [],
          "finished_at": "2019-08-30T09:38:29.679Z",
          "id": 25,
          "started_at": "2019-08-30T09:21:44.810Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [
            {
              "change_type": "deletion",
              "deployment_status": "pending",
              "guid": "p-compliance-scanner-1905a6707e4f434e315a",
              "identifier": "p-compliance-scanner",
              "label": "Compliance Scanner for PCF",
              "product_version": "1.0.0-beta.25"
            }
          ],
          "finished_at": "2019-08-08T02:10:51.130Z",
          "id": 24,
          "started_at": "2019-08-08T02:09:10.290Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-07-18T12:27:54.301Z",
          "id": 23,
          "started_at": "2019-07-18T11:31:19.781Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": [
            {
              "change_type": "update",
              "deployment_status": "successful",
              "guid": "cf-3095a0a264aa5900d79f",
              "identifier": "cf",
              "label": "Small Footprint PAS",
              "product_version": "2.5.3"
            }
          ]
        },
        {
          "additions": [],
          "deletions": [
            {
              "change_type": "deletion",
              "deployment_status": "pending",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "finished_at": "2019-07-07T00:16:31.948Z",
          "id": 22,
          "started_at": "2019-07-07T00:04:32.974Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-07-07T00:02:12.003Z",
          "id": 21,
          "started_at": "2019-07-06T23:57:06.401Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": [
            {
              "change_type": "update",
              "deployment_status": "failed",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ]
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "successful",
              "guid": "p-compliance-scanner-1905a6707e4f434e315a",
              "identifier": "p-compliance-scanner",
              "label": "Compliance Scanner for PCF",
              "product_version": "1.0.0-beta.25"
            }
          ],
          "deletions": [],
          "finished_at": "2019-06-10T09:23:19.595Z",
          "id": 20,
          "started_at": "2019-06-10T09:10:44.431Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "skipped",
              "guid": "aquasec-1b94477ae275ee81be58",
              "identifier": "aquasec",
              "label": "Aqua Security for PCF",
              "product_version": "1.0.0"
            }
          ],
          "deletions": [],
          "finished_at": "2019-06-06T17:38:18.396Z",
          "id": 19,
          "started_at": "2019-06-06T17:35:34.614Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "skipped",
              "guid": "aquasec-1b94477ae275ee81be58",
              "identifier": "aquasec",
              "label": "Aqua Security for PCF",
              "product_version": "1.0.0"
            }
          ],
          "deletions": [],
          "finished_at": "2019-06-06T17:33:18.545Z",
          "id": 18,
          "started_at": "2019-06-06T17:21:41.529Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T11:15:43.546Z",
          "id": 17,
          "started_at": "2019-06-04T10:49:57.969Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T10:44:04.018Z",
          "id": 16,
          "started_at": "2019-06-04T10:17:28.230Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "failed",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T09:52:30.782Z",
          "id": 15,
          "started_at": "2019-06-04T09:48:45.867Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "failed",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T09:21:17.245Z",
          "id": 14,
          "started_at": "2019-06-04T09:17:45.360Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "failed",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T08:50:33.333Z",
          "id": 13,
          "started_at": "2019-06-04T08:47:09.790Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T08:32:44.772Z",
          "id": 12,
          "started_at": "2019-06-04T08:23:27.386Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T08:16:41.757Z",
          "id": 11,
          "started_at": "2019-06-04T08:13:54.645Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "failed",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T01:53:50.594Z",
          "id": 10,
          "started_at": "2019-06-04T01:43:56.205Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": [
            {
              "change_type": "update",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ]
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-04T01:28:22.975Z",
          "id": 9,
          "started_at": "2019-06-04T01:24:52.587Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-06-03T08:37:25.961Z",
          "id": 8,
          "started_at": "2019-06-03T08:13:07.511Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": [
            {
              "change_type": "update",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ]
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "successful",
              "guid": "pas-windows-72031f60ab052fa4d473",
              "identifier": "pas-windows",
              "label": "Pivotal Application Service for Windows",
              "product_version": "2.5.2"
            }
          ],
          "deletions": [],
          "finished_at": "2019-06-03T04:57:06.897Z",
          "id": 7,
          "started_at": "2019-06-03T03:52:13.705Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "successful",
              "guid": "pivotal-mysql-0e5d717f1c87c8095c9d",
              "identifier": "pivotal-mysql",
              "label": "MySQL for Pivotal Cloud Foundry v2",
              "product_version": "2.5.4-build.51"
            }
          ],
          "deletions": [],
          "finished_at": "2019-05-22T05:15:55.703Z",
          "id": 6,
          "started_at": "2019-05-22T04:09:49.841Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "cf-3095a0a264aa5900d79f",
              "identifier": "cf",
              "label": "Small Footprint PAS",
              "product_version": "2.5.3"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-05-22T02:12:22.934Z",
          "id": 5,
          "started_at": "2019-05-22T01:45:28.101Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            },
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "cf-3095a0a264aa5900d79f",
              "identifier": "cf",
              "label": "Small Footprint PAS",
              "product_version": "2.5.3"
            }
          ],
          "updates": []
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "failed",
              "guid": "cf-3095a0a264aa5900d79f",
              "identifier": "cf",
              "label": "Small Footprint PAS",
              "product_version": "2.5.3"
            }
          ],
          "deletions": [],
          "finished_at": "2019-05-22T00:23:29.844Z",
          "id": 4,
          "started_at": "2019-05-21T23:16:42.418Z",
          "status": "failed",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [],
          "deletions": [],
          "finished_at": "2019-05-16T01:50:50.640Z",
          "id": 3,
          "started_at": "2019-05-16T01:45:22.438Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": [
            {
              "change_type": "update",
              "deployment_status": "successful",
              "guid": "pivotal-container-service-5c28f63410227c2221c8",
              "identifier": "pivotal-container-service",
              "label": "Enterprise PKS",
              "product_version": "1.4.0-build.31"
            }
          ]
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "successful",
              "guid": "pivotal-container-service-5c28f63410227c2221c8",
              "identifier": "pivotal-container-service",
              "label": "Enterprise PKS",
              "product_version": "1.4.0-build.31"
            }
          ],
          "deletions": [],
          "finished_at": "2019-05-15T00:08:32.241Z",
          "id": 2,
          "started_at": "2019-05-14T23:33:58.105Z",
          "status": "succeeded",
          "unchanged": [
            {
              "change_type": "unchanged",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "updates": []
        },
        {
          "additions": [
            {
              "change_type": "addition",
              "deployment_status": "successful",
              "guid": "p-bosh-c1853604618b1b3e10fd",
              "identifier": "p-bosh",
              "label": "BOSH Director",
              "product_version": "2.5.3-build.185"
            }
          ],
          "deletions": [],
          "finished_at": "2019-05-14T23:29:47.525Z",
          "id": 1,
          "started_at": "2019-05-14T23:13:13.244Z",
          "status": "succeeded",
          "unchanged": [],
          "updates": []
        }
      ]
    }
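
    Because the files are plain JSON, jq can also summarise them. For example, a quick sketch that lists each installation's id, status, and finish time from the same file:

    $ cat ./output/opsmanager/ops_manager_installations | jq -r '.installations[] | "\(.id)  \(.status)  \(.finished_at)"'
    25  failed  2019-08-30T09:38:29.679Z
    24  succeeded  2019-08-08T02:10:51.130Z
    ...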

    Optionally, you should send this TAR file output with every ticket/case you create so support has a good snapshot of what your environment looks like to help diagnose support issues for you.

    telemetry-collector send --path --api-key

    For the API key, please contact your Pivotal AE or Platform Architect to request one, as the Telemetry team issues API keys to customers
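
    As an illustration only (the exact binary and key will differ in your environment), the send command with the TAR produced earlier would look something like this:

    $ ./telemetry-collector-darwin-amd64 send --path output/FoundationDetails_1571355194.tar --api-key {API-KEY}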


    More Information 

    https://docs.pivotal.io/telemetry/1-1/index.html
    Categories: Fusion Middleware

    Basic VMware Harbor Registry usage for Pivotal Container Service (PKS)

    Pas Apicella - Tue, 2019-09-24 01:25
    VMware Harbor Registry is an enterprise-class registry server that stores and distributes container images. Harbor allows you to store and manage images for use with Enterprise Pivotal Container Service (Enterprise PKS).

    In this simple example we show the minimum you need to get an image stored in Harbor deployed onto your PKS cluster. First we need the following to be able to run this basic demo

    Required Steps

    1. PKS installed with Harbor Registry tile added as shown below


    2. VMware Harbor Registry integrated with Enterprise PKS as per the link below. The most important step is "Import the CA Certificate Used to Sign the Harbor Certificate and Key to BOSH"; you must complete that prior to creating a PKS cluster

    https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html

    3. A PKS cluster created. You must have completed step #2 before you create the cluster

    https://docs.pivotal.io/pks/1-4/create-cluster.html

    $ pks cluster oranges

    Name:                     oranges
    Plan Name:                small
    UUID:                     21998d0d-b9f8-437c-850c-6ee0ed33d781
    Last Action:              CREATE
    Last Action State:        succeeded
    Last Action Description:  Instance provisioning completed
    Kubernetes Master Host:   oranges.run.yyyy.bbbb.pivotal.io
    Kubernetes Master Port:   8443
    Worker Nodes:             4
    Kubernetes Master IP(s):  1.1.1.1
    Network Profile Name:

    4. Docker Desktop Installed on your local machine



    Steps

    1. First let's log into Harbor and create a new project. Make sure you record the username and password you have assigned for the project. In this example I make the project public.




    Details

    • Project Name: cto_apj
    • Username: pas
    • Password: ****

    2. Next, in order to connect to our registry from our local laptop, we will need to trust its self-signed certificate.

    The VMware Harbor registry isn't running on a public domain and is using a self-signed certificate, so we need to trust that certificate on our client machine. In my case that is a Mac OSX client running Docker for Mac. The link below shows how to add a self-signed registry certificate to Linux and Mac clients

    https://blog.container-solutions.com/adding-self-signed-registry-certs-docker-mac

    You can download the self-signed certificate from Pivotal Ops Manager as shown below


    With all that in place, the following command is all I need to run

    $ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
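
    For Linux clients the equivalent (assuming the standard Docker daemon certificate directory) is to drop the registry CA certificate under /etc/docker/certs.d for the registry host, for example:

    $ sudo mkdir -p /etc/docker/certs.d/harbor.haas-bbb.yyyy.pivotal.io
    $ sudo cp ca.crt /etc/docker/certs.d/harbor.haas-bbb.yyyy.pivotal.io/ca.crt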

    3. Now let's log in to the registry using a command as follows

    $ docker login harbor.haas-bbb.yyyy.pivotal.io -u pas
    Password:
    Login Succeeded

    4. I already have an image sitting on Docker Hub, so let's tag it and then push it to our VMware Harbor registry as shown below

     $ docker tag pasapples/customer-api:latest harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest
     $ docker push harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest


    5. Now let's create a new secret for accessing the container registry

    $ kubectl create secret docker-registry regcred --docker-server=harbor.haas-bbb.yyyy.io --docker-username=pas --docker-password=**** --docker-email=papicella@pivotal.io

    6. Now let's deploy this image to our PKS cluster using a deployment YAML file as follows

    customer-api.yaml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: customer-api
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: customer-api
        spec:
          containers:
            - name: customer-api
              image: harbor.haas-206.pez.pivotal.io/cto_apj/customer-api:latest
              ports:
                - containerPort: 8080

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: customer-api-service
      labels:
        name: customer-api-service
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      selector:
        app: customer-api
      type: LoadBalancer
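
    Note that because the cto_apj project was made public, no pull credentials are needed at deploy time. If the project were private, the Deployment's pod template would also need to reference the regcred secret created in step 5; a minimal sketch of that pod spec fragment:

        spec:
          containers:
            - name: customer-api
              image: harbor.haas-206.pez.pivotal.io/cto_apj/customer-api:latest
              ports:
                - containerPort: 8080
          imagePullSecrets:
            - name: regcred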

    7. Deploy as follows

    $ kubectl create -f customer-api.yaml

    8. You should see the POD and SERVICE running as follows

    $ kubectl get pods | grep customer-api
    customer-api-7b8fcd5778-czh46                    1/1     Running   0          58s

    $ kubectl get svc | grep customer-api
    customer-api-service            LoadBalancer   10.100.2.2    10.195.1.1.80.5   80:31156/TCP 


    More Information

    PKS Release Notes 1.4
    https://docs.pivotal.io/pks/1-4/release-notes.html

    VMware Harbor Registry
    https://docs.vmware.com/en/VMware-Enterprise-PKS/1.4/vmware-harbor-registry/GUID-index.html

    Categories: Fusion Middleware

    Taking kpack, a Kubernetes Native Container Build Service for a test drive

    Pas Apicella - Tue, 2019-09-10 23:51
    We wanted Build Service to combine the Cloud Native Buildpacks experience with the declarative model of Kubernetes, and extend the K8s workflow in an idiomatic fashion. With this goal in mind, we leveraged custom resource definitions to extended the K8s API. This way, we could use Kubernetes technology to create a composable, declarative architecture to power build service. The Custom Resource Definitions (CRDs) are coordinated by Custom Controllers to automate container image builds and keep them up to date based on user-provided configuration.

    So with that in mind let's go and deploy kpack on a GKE cluster and build our first image...



    Steps

    1. Install v0.0.3 of kpack into your Kube cluster

    $ kubectl apply -f <(curl -L https://github.com/pivotal/kpack/releases/download/v0.0.3/release.yaml)

    ...

    namespace/kpack created
    customresourcedefinition.apiextensions.k8s.io/builds.build.pivotal.io created
    customresourcedefinition.apiextensions.k8s.io/builders.build.pivotal.io created
    clusterrole.rbac.authorization.k8s.io/kpack-admin created
    clusterrolebinding.rbac.authorization.k8s.io/kpack-controller-admin created
    deployment.apps/kpack-controller created
    customresourcedefinition.apiextensions.k8s.io/images.build.pivotal.io created
    serviceaccount/controller created
    customresourcedefinition.apiextensions.k8s.io/sourceresolvers.build.pivotal.io created

    2. Let's verify which Custom Resource Definitions (CRDs) have been installed

    $ kubectl api-resources --api-group build.pivotal.io
    NAME              SHORTNAMES                    APIGROUP           NAMESPACED   KIND
    builders          cnbbuilder,cnbbuilders,bldr   build.pivotal.io   true         Builder
    builds            cnbbuild,cnbbuilds,bld        build.pivotal.io   true         Build
    images            cnbimage,cnbimages            build.pivotal.io   true         Image
    sourceresolvers                                 build.pivotal.io   true         SourceResolver

    3. Create a builder resource as follows

    builder-resource.yaml

    apiVersion: build.pivotal.io/v1alpha1
    kind: Builder
    metadata:
      name: sample-builder
    spec:
      image: cloudfoundry/cnb:bionic
      updatePolicy: polling

    $ kubectl create -f builder-resource.yaml
    builder.build.pivotal.io/sample-builder created

    $ kubectl get builds,images,builders,sourceresolvers
    NAME                                      AGE
    builder.build.pivotal.io/sample-builder   42s
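
    If you want more detail on what the controller resolved for this builder, a standard kubectl describe shows the resource's status:

    $ kubectl describe builder sample-builder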

    4. Create a secret for push access to the desired docker registry

    docker-secret.yaml

    apiVersion: v1
    kind: Secret
    metadata:
      name: basic-docker-user-pass
      annotations:
        build.pivotal.io/docker: index.docker.io
    type: kubernetes.io/basic-auth
    stringData:
      username: papicella
      password:

    $ kubectl create -f docker-secret.yaml
    secret/basic-docker-user-pass created

    5. Create a secret for pull access from the desired git repository. The example below is for a github repository

    git-secret.yaml

    apiVersion: v1
    kind: Secret
    metadata:
      name: basic-git-user-pass
      annotations:
        build.pivotal.io/git: https://github.com
    type: kubernetes.io/basic-auth
    stringData:
      username: papicella
      password:

    $ kubectl create -f git-secret.yaml
    secret/basic-git-user-pass created

    6. Create a service account that uses the docker registry secret and the git repository secret.

    service-account.yaml

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: service-account
    secrets:
      - name: basic-docker-user-pass
      - name: basic-git-user-pass

    $ kubectl create -f service-account.yaml
    serviceaccount/service-account created

    7. Install the logs utility. In order to view the build logs for each image as it's created, right now you have to use a utility that you build from the kpack GitHub repo source files. Follow the steps below to get it built

    $ export GOPATH=`pwd`
    $ git clone https://github.com/pivotal/kpack $GOPATH/src/github.com/pivotal/kpack
    $ cd $GOPATH/src/github.com/pivotal/kpack
    $ dep ensure -v
    $ go build ./cmd/logs

    You will have a "logs" executable created in the current directory, which we will use shortly

    8. Create an image as follows. The GitHub repo I have here is public, so it will work with no problem at all

    pbs-demo-sample-image.yaml

    apiVersion: build.pivotal.io/v1alpha1
    kind: Image
    metadata:
      name: pbs-demo-image
    spec:
      tag: pasapples/pbs-demo-image
      serviceAccount: service-account
      builderRef: sample-builder
      cacheSize: "1.5Gi" # Optional, if not set then the caching feature is disabled
      failedBuildHistoryLimit: 5 # Optional, if not present defaults to 10
      successBuildHistoryLimit: 5 # Optional, if not present defaults to 10
      source:
        git:
          url: https://github.com/papicella/pbs-demo
          revision: master
      build: # Optional
        env:
          - name: BP_JAVA_VERSION
            value: 11.*
        resources:
          limits:
            cpu: 100m
            memory: 1G
          requests:
            cpu: 50m
            memory: 512M

    $ kubectl create -f pbs-demo-sample-image.yaml
    image.build.pivotal.io/sample-image created

    9. At this point we can view the created image and the current Cloud Native Buildpacks builds being run using two commands as follows.

    $ kubectl get images
    NAME             LATESTIMAGE   READY
    pbs-demo-image                 Unknown

    $ kubectl get cnbbuilds
    NAME                           IMAGE   SUCCEEDED
    pbs-demo-image-build-1-pvh6k           Unknown

    Note: Unknown is normal as it has not yet completed 

    10. Now, using our "logs" utility, let's view the current build logs

    $ ./logs -image pbs-demo-image
    {"level":"info","ts":1568175056.446671,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
    source-init:main.go:277: Successfully cloned "https://github.com/papicella/pbs-demo" @ "cee67e26d55b6d2735afd7fa3e0b81e251e0d5ce" in path "/workspace"
    2019/09/11 04:11:23 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
    ======== Results ========
    skip: org.cloudfoundry.archiveexpanding@1.0.0-RC03
    pass: org.cloudfoundry.openjdk@1.0.0-RC03
    pass: org.cloudfoundry.buildsystem@1.0.0-RC03
    pass: org.cloudfoundry.jvmapplication@1.0.0-RC03
    pass: org.cloudfoundry.tomcat@1.0.0-RC03
    pass: org.cloudfoundry.springboot@1.0.0-RC03
    pass: org.cloudfoundry.distzip@1.0.0-RC03
    skip: org.cloudfoundry.procfile@1.0.0-RC03
    skip: org.cloudfoundry.azureapplicationinsights@1.0.0-RC03
    skip: org.cloudfoundry.debug@1.0.0-RC03
    skip: org.cloudfoundry.googlestackdriver@1.0.0-RC03
    skip: org.cloudfoundry.jdbc@1.0.0-RC03
    skip: org.cloudfoundry.jmx@1.0.0-RC03
    pass: org.cloudfoundry.springautoreconfiguration@1.0.0-RC03
    Resolving plan... (try #1)
    Success! (7)
    Cache '/cache': metadata not found, nothing to restore
    Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:40fe8aa932037faad697c3934667241eef620aac1d09fc7bb5ec5a75d5921e3e'
    Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'

    ......

    11. This first build will take some time given it has to download all the Maven dependencies, but you may be wondering how we determine how many builds have been run so we can view the logs of any build for the image we just created. To do that, run a command as follows

    $ kubectl get pods --show-labels | grep pbs-demo-image
    pbs-demo-image-build-1-pvh6k-build-pod   0/1     Init:6/9   0          6m29s   image.build.pivotal.io/buildNumber=1,image.build.pivotal.io/image=pbs-demo-image
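
    The labels shown above can also be used as a selector, which is handy once there are many builds for an image; for example:

    $ kubectl get pods -l image.build.pivotal.io/image=pbs-demo-image,image.build.pivotal.io/buildNumber=1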

    12. From the output above you can clearly see we have just a single build. To view the logs of a particular build we use its build number, as shown above, as follows

    $ ./logs -image pbs-demo-image -build {ID}

    ...

    13. If we wait at least 5 minutes (the first build always takes time just for the dependencies to be downloaded), it will eventually complete, and we can confirm that using the following commands

    $ kubectl get images
    NAME             LATESTIMAGE                                                                                                        READY
    pbs-demo-image   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

    $ kubectl get cnbbuilds
    NAME                           IMAGE                                                                                                              SUCCEEDED
    pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

    14. Now let's actually make a code change to our source code and issue a git commit. In this example below I am using IntelliJ IDEA for my code change/commit


    15. Now let's see if a new build is kicked off, as it should be. Run the following command

    $ kubectl get cnbbuilds
    NAME                           IMAGE                                                                                                              SUCCEEDED
    pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

    pbs-demo-image-build-2-stl8w                                                                                                                      Unknown


    16. Now let's confirm that this new build is in fact build number 2 using a command as follows

    $ kubectl get pods --show-labels | grep pbs-demo-image
    pbs-demo-image-build-1-pvh6k-build-pod   0/1     Completed   0          21m     image.build.pivotal.io/buildNumber=1,image.build.pivotal.io/image=pbs-demo-image
    pbs-demo-image-build-2-stl8w-build-pod   0/1     Init:6/9    0          2m15s   image.build.pivotal.io/buildNumber=2,image.build.pivotal.io/image=pbs-demo-image

    17. Let's view the logs for build 2 as follows

    $ ./logs -image pbs-demo-image -build 2
    {"level":"info","ts":1568176191.088838,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
    source-init:main.go:277: Successfully cloned "https://github.com/papicella/pbs-demo" @ "e2830bbcfb32bfdd72bf5d4b17428c405f46f3c1" in path "/workspace"
    2019/09/11 04:29:55 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
    ======== Results ========
    skip: org.cloudfoundry.archiveexpanding@1.0.0-RC03
    pass: org.cloudfoundry.openjdk@1.0.0-RC03
    pass: org.cloudfoundry.buildsystem@1.0.0-RC03
    pass: org.cloudfoundry.jvmapplication@1.0.0-RC03
    pass: org.cloudfoundry.tomcat@1.0.0-RC03
    pass: org.cloudfoundry.springboot@1.0.0-RC03
    pass: org.cloudfoundry.distzip@1.0.0-RC03
    skip: org.cloudfoundry.procfile@1.0.0-RC03
    skip: org.cloudfoundry.azureapplicationinsights@1.0.0-RC03
    skip: org.cloudfoundry.debug@1.0.0-RC03
    skip: org.cloudfoundry.googlestackdriver@1.0.0-RC03
    skip: org.cloudfoundry.jdbc@1.0.0-RC03
    skip: org.cloudfoundry.jmx@1.0.0-RC03
    pass: org.cloudfoundry.springautoreconfiguration@1.0.0-RC03
    Resolving plan... (try #1)
    Success! (7)
    Restoring cached layer 'org.cloudfoundry.openjdk:openjdk-jdk'
    Restoring cached layer 'org.cloudfoundry.openjdk:90c33cf3f2ed0bd773f648815de7347e69cfbb3416ef3bf41616ab1c4aa0f5a8'
    Restoring cached layer 'org.cloudfoundry.buildsystem:build-system-cache'
    Restoring cached layer 'org.cloudfoundry.jvmapplication:executable-jar'
    Restoring cached layer 'org.cloudfoundry.springboot:spring-boot'
    Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe'
    Using cached layer 'org.cloudfoundry.openjdk:90c33cf3f2ed0bd773f648815de7347e69cfbb3416ef3bf41616ab1c4aa0f5a8'
    Using cached layer 'org.cloudfoundry.openjdk:openjdk-jdk'
    Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
    Using cached layer 'org.cloudfoundry.buildsystem:build-system-cache'
    Using cached launch layer 'org.cloudfoundry.jvmapplication:executable-jar'
    Rewriting metadata for layer 'org.cloudfoundry.jvmapplication:executable-jar'
    Using cached launch layer 'org.cloudfoundry.springboot:spring-boot'
    Rewriting metadata for layer 'org.cloudfoundry.springboot:spring-boot'
    Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'

    Cloud Foundry OpenJDK Buildpack 1.0.0-RC03
      OpenJDK JDK 11.0.4: Reusing cached layer
      OpenJDK JRE 11.0.4: Reusing cached layer

    Cloud Foundry Build System Buildpack 1.0.0-RC03
        Using wrapper
        Linking Cache to /home/cnb/.m2
      Compiled Application: Contributing to layer
        Executing /workspace/mvnw -Dmaven.test.skip=true package
    [INFO] Scanning for projects...
    [INFO]
    [INFO] ------------------------< com.example:pbs-demo >------------------------
    [INFO] Building pbs-demo 0.0.1-SNAPSHOT
    [INFO] --------------------------------[ jar ]---------------------------------
    [INFO]
    [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ pbs-demo ---

    ...

    18. This build won't take as long as the first one: this time we don't have to pull down the Maven dependencies, and layers that have not changed since the first OCI-compliant image are reused, which is something Cloud Native Buildpacks does for us nicely. Once complete, you now have two builds as follows

    $ kubectl get cnbbuilds
    NAME                           IMAGE                                                                                                              SUCCEEDED
    pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True
    pbs-demo-image-build-2-stl8w   index.docker.io/pasapples/pbs-demo-image@sha256:a22c64754cb7addc3f7e9a9335b094adf466b5f8035227691e81403d0c9c177f   True

    19. Now let's run this locally, given I have Docker Desktop running. First we pull down the created image, which in this case is the latest build (build 2)



    $ docker pull pasapples/pbs-demo-image
    Using default tag: latest
    latest: Pulling from pasapples/pbs-demo-image
    35c102085707: Already exists
    251f5509d51d: Already exists
    8e829fe70a46: Already exists
    6001e1789921: Already exists
    76a30c9e6d47: Pull complete
    8538f1fe6188: Pull complete
    2a899c7e684d: Pull complete
    0ea0c38329cb: Pull complete
    bb281735f842: Pull complete
    664d87aab7ff: Pull complete
    f4b03070a779: Pull complete
    682af613b7ca: Pull complete
    b893e5904080: Pull complete
    Digest: sha256:a22c64754cb7addc3f7e9a9335b094adf466b5f8035227691e81403d0c9c177f
    Status: Downloaded newer image for pasapples/pbs-demo-image:latest

    20. Now let's run it

    $ docker run -p 8080:8080 pasapples/pbs-demo-image

      .   ____          _            __ _ _
     /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
    ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
     \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
      '  |____| .__|_| |_|_| |_\__, | / / / /
     =========|_|==============|___/=/_/_/_/
     :: Spring Boot ::        (v2.1.6.RELEASE)

    2019-09-11 04:40:41.747  WARN 1 --- [           main] pertySourceApplicationContextInitializer : Skipping 'cloud' property source addition because not in a cloud
    2019-09-11 04:40:41.751  WARN 1 --- [           main] nfigurationApplicationContextInitializer : Skipping reconfiguration because not in a cloud
    2019-09-11 04:40:41.760  INFO 1 --- [           main] com.example.pbsdemo.PbsDemoApplication   : Starting PbsDemoApplication on 5975633400c4 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)

    ...

    2019-09-11 04:40:50.255  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
    2019-09-11 04:40:50.259  INFO 1 --- [           main] com.example.pbsdemo.PbsDemoApplication   : Started PbsDemoApplication in 8.93 seconds (JVM running for 9.509)
    Hibernate: insert into customer (id, name, status) values (null, ?, ?)
    2019-09-11 04:40:50.323  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=1, name=pas, status=active)
    Hibernate: insert into customer (id, name, status) values (null, ?, ?)
    2019-09-11 04:40:50.326  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=2, name=lucia, status=active)
    Hibernate: insert into customer (id, name, status) values (null, ?, ?)
    2019-09-11 04:40:50.329  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=3, name=lucas, status=inactive)
    Hibernate: insert into customer (id, name, status) values (null, ?, ?)
    2019-09-11 04:40:50.331  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=4, name=siena, status=inactive)

    21. Invoke it through a browser as follows

    http://localhost:8080/swagger-ui.html
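
    As a quick command-line check (assuming the Springfox defaults behind this Swagger UI; adjust the path if the app overrides it), the API description can also be fetched directly:

    # /v2/api-docs is the Springfox 2.x default location for the OpenAPI document
    $ curl -s http://localhost:8080/v2/api-docs | jq '.info'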


    22. Finally, let's run this application on our K8s cluster itself. Start by creating a basic deployment YAML file as follows

    run-pbs-image-k8s-yaml.yaml

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: pbs-demo-image
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: pbs-demo-image
        spec:
          containers:
            - name: pbs-demo-image
              image: pasapples/pbs-demo-image
              ports:
                - containerPort: 8080

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: pbs-demo-image-service
      labels:
        name: pbs-demo-image-service
    spec:
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
      selector:
        app: pbs-demo-image
      type: LoadBalancer

    23. Apply your config

    $ kubectl create -f run-pbs-image-k8s-yaml.yaml
    deployment.extensions/pbs-demo-image created
    service/pbs-demo-image-service created

    24. Check that we have running pods and the LoadBalancer service created

    $ kubectl get all
    NAME                                         READY   STATUS      RESTARTS   AGE
    pod/pbs-demo-image-build-1-pvh6k-build-pod   0/1     Completed   0          39m
    pod/pbs-demo-image-build-2-stl8w-build-pod   0/1     Completed   0          19m
    pod/pbs-demo-image-f5c9d989-l2hg5            1/1     Running     0          48s
    pod/pbs-demo-image-f5c9d989-pfxzs            1/1     Running     0          48s

    NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes               ClusterIP      10.101.0.1      <none>        443/TCP        86m
    service/pbs-demo-image-service   LoadBalancer   10.101.15.197   <pending>     80:30769/TCP   49s

    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/pbs-demo-image   2/2     2            2           49s

    NAME                                      DESIRED   CURRENT   READY   AGE
    replicaset.apps/pbs-demo-image-f5c9d989   2         2         2       50s
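
    Once the LoadBalancer reports an external IP, the same Swagger UI is reachable through the service. A quick sketch that looks the IP up and probes it:

    $ EXTERNAL_IP=$(kubectl get svc pbs-demo-image-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ curl -I http://$EXTERNAL_IP/swagger-ui.html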


    More Information

    Introducing kpack, a Kubernetes Native Container Build Service
    https://content.pivotal.io/blog/introducing-kpack-a-kubernetes-native-container-build-service

    Cloud Native Buildpacks
    https://buildpacks.io/
    Categories: Fusion Middleware
