Cloud Native Planet


This week I had the pleasure of taking a deeper look at Morpheus Data’s cloud management and PaaS solution. I deployed it in my home lab and the install went as smooth as a baby’s bottom: very quick and very simple to complete, with the product doing most of the work for you. So to pass on this knowledge I recorded my steps in a quick install guide that demonstrates install, configuration and the deployment of an instance. All in all it took less than 2 hours.

You should try it too; you’ll love it.

 

Download the guide here > Morpheus-Quick-Install-Guide

 

2 hours… think about that, and compare it to other products. I think this will be the first post of many.

 

 

Very proud to announce the first sponsor of my new blog, blog.cloudnativeplanet.com: Morpheus Data. I was very lucky to see a demo of Morpheus Data and was wowed by how feature-rich the solution is. The person running the demo just happened to be an old friend of mine from Veeam, James Smith. Morpheus is a full-blown CMP/PaaS (or PaaS/CMP, whichever way round you class it), but I see it first and foremost as a PaaS product. If you have ever sat in one of my PaaS presentations you’ll know I look for certain characteristics in a PaaS solution, and in my opinion Morpheus has them. At a later stage I’m going to get a chance to play with the product myself, but for now, some words from me on the subject:

 

Morpheus Data is a company that came to my attention during my research into CMP solutions. From what I can tell so far, Morpheus has the most complete support for both public and private clouds, and it lets customers deploy operating systems, applications or containers into any of the supported platforms equally easily. As well as providing a very simple self-service portal to deploy these ‘instances’, it also lets you build application stacks of multiple components and deploy them at the click of a button. The product also has integrations with tools that are very popular in the DevOps world, such as Ansible, Chef and Puppet, allowing workflows to be run at deployment time or at any time after an instance or application has been deployed.

 

Not only does Morpheus claim to make deployments very easy, the product also offers monitoring, log capture and backup for the instances.

 

The complete list of supported clouds, integrations and infrastructure tools is very extensive, and if it can do everything it claims, this could be a very interesting product.

 

Watch out for more good stuff from Morpheus Data.

 

So this week I started tinkering with the Cloud Foundry API. In all honesty I had to get a little help from one of our developers because I was doing something wrong: forgetting that my lab doesn’t use trusted certificates, I wasn’t setting the options shown below that tell the client to ignore that. Anyway, just a slight glitch.

First you need to get your endpoint URL by asking the CF API for info.

 

Get Endpoint:

GET https://API.pcfsys.domain.local/info

This should give you a payload with the endpoint listed. Using that endpoint URL, now POST a payload like the one below, substituting your domain and your admin username/password.

 

Get token:

POST https://login.pcfsys.domain.local/oauth/token

Headers:

Content-Type:application/x-www-form-urlencoded

Accept:application/json;charset=utf-8

Authorization:Basic Y2Y6

X-UAA-Endpoint:https://login.domain.local

rejectUnauthorized:false

requestCert:false

agent:false

(Strictly speaking, these last three aren’t standard HTTP headers but client options, Node.js-style; the first two tell the client to accept my lab’s untrusted certificates.)

Body:

grant_type=password&username=admin&password=somepassword

 

At this point you should be presented with a payload with your access token listed. Now you can use the access token to GET things, like a list of the CF orgs.

 

List orgs:

GET https://API.pcfsys.domain.local/v2/organizations

Headers:

Accept:application/json;charset=utf-8

Authorization:bearer eyJhbGciOiJSUzI1…………..

 

List Spaces:

GET https://API.pcfsys.domain.local/v2/spaces

Headers = same as orgs

 

List Apps:

GET https://API.pcfsys.domain.local/v2/apps

Headers = same as orgs

 

List Users:

GET https://API.pcfsys.domain.local/v2/users

Headers = same as orgs

 

OK, how about something a bit more like a query? How about we list all the spaces for a given user?

When you called the list of users you should have been presented with a payload that also includes the GUID for each user. Using this GUID you can list the spaces for that user.

 

List spaces for a specific user using the GUID:

GET https://API.pcfsys.domain.local/v2/users/94aa5e3b-86e1-4ca0-90b1-4734d6555ef2/spaces

Headers = same as orgs

 

Simple, right? Watch this space for more on the Cloud Foundry API.

So after many attempts at getting Pivotal Cloud Foundry (PCF) installed in my vSphere home lab, I finally mastered it. There are a few tweaks not mentioned in any educational material or docs that you only learn through trial and error, so I figured what I learned along the way would make a great install guide. If you want to deploy PCF in your home lab, follow this guide > PCF-Quick-Install-Guide

 

Reach out to me if you have feedback or need advice. Good hunting!

 

In Docker, to allow Docker clients to access the Docker server (daemon) over the network, you have to switch the Docker daemon from listening on a local socket to listening on a TCP port.

 

First, make sure Docker is stopped:

 

$service docker stop

 

In the training I watched the command was:

 

$docker -H IPaddress:Port -d &

 

For example:

 

$docker -H 192.168.0.67:2375 -d &

 

2375 being the default unencrypted (non-TLS) port.

 

This didn’t work for me, and after digging around I found the latest syntax for this command looks like:

 

$docker daemon -H 192.168.0.67:2375 &

 

My advice, though, is to run the daemon on both the socket and the TCP port, as it’s easier for testing when you are on the Docker host system. The command to do this is:

 

$docker daemon -H unix:///var/run/docker.sock -H 192.168.0.67:2375 &

 

You may receive an error about an existing docker.pid file in /var/run. I just deleted it using:

 

$rm /var/run/docker.pid

 

Then run the previous command again to fire up Docker on both the socket and the TCP port.

 

$docker daemon -H unix:///var/run/docker.sock -H 192.168.0.67:2375 &
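As an alternative to command-line flags, recent Docker releases can also pick up the same dual listener from the daemon config file, which I assume here lives at the default path /etc/docker/daemon.json:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://192.168.0.67:2375"]
}
```

One caveat: on systemd-based distros you may also need to remove any -H flag from the Docker service unit, as the daemon won’t start if hosts are defined in both places.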

So I wanted to test a Go web app running on Cloud Foundry, and I’m still honing my skills in this area. I needed a simple hello-world web app and found some Go examples on the web that use the net/http package.

 

package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there, this is a Go App running on CF %s!", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}


I ran into an issue which was a schoolboy error, and fundamental to Cloud Foundry. If you look at the code above you will notice I hard-coded the HTTP listener to port 8080. Whoops. The mistake was presuming I could get a route back to my CF container from the outside world on a specific port number.

 

CF gives each instance of your app container a dynamically assigned port number. The key is for your code to make use of an environment variable called “PORT”. To do that you first need to import the “os” package so you can use the “Getenv” function to retrieve the value of the assigned port. Now my code looks like this:

 

package main

import (
	"fmt"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there Ricky, this is a Go App running on CF %s!", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":"+os.Getenv("PORT"), nil)
}

 

 

Now when you access your application through Cloud Foundry, CF will route you through to the correct port.

This week I decided to try to install Pivotal Cloud Foundry in my lab. The first step, installing Ops Manager, was successful, but every time I tried to deploy the Elastic Runtime component I got an error:

 

Error 400007: `diego_cell-partition-4950f67e43d8dc1414ba/0 (922615a1-eb95-4603-9360-df75631365db)’ is not running after update. Review logs for failed jobs: rep

 

This error meant nothing to me, so I logged a ticket on the Pivotal forum.

 

Trying to analyse what was going on myself, the thought that it was my lab kept nagging me. I knew I had enough memory and storage. The storage connected to my lab is a 2TB LUN presented over a 1Gb iSCSI connection and backed by SATA disks.

 

I had an idea, and I managed to install Elastic Runtime successfully, but I had to introduce an SSD disk into my lab for it to work. What made me think of putting PCF on SSD was simply the time it took to deploy in my lab; I wondered if there was storage latency and PCF was timing out on the install of Elastic Runtime.

 

Well, it looks like that SSD expansion I was planning for my Synology storage array will have to come sooner rather than later.

While setting up PCF in my home lab I noticed a quirk. You have to create a network for PCF to use and specify a subnet for PCF to assign IPs from. Personally I would rather specify a range than a whole subnet; as it is, you have to exclude ranges that might be used by other VMs, systems or devices on your network. I did this for different subnets and stopped my exclusions at .254, knowing that .255 is the broadcast address for each subnet. Whoops, I think I found a bug, because any other software would know not to use the broadcast address. Time to ping my mates at Pivotal.


Hello All.

Welcome to this new blog. I am a veteran blogger in the virtualisation space, but I have started to reskill in Cloud Native technologies like Cloud Foundry and Docker, so I decided to start blogging about my experiences. Watch this space for education and interesting bits on Cloud Native tech.
