When Backspace key does not work

Workaround: select the text and delete it, or retype. I have been doing that on my 2012 Acer Aspire One 725, including while typing this.


Nutanix, Ravello, Cisco Cloud Lab and Tech Field day

When I learned about Nutanix last year, I was glued to it the whole day. Nutanix is a Hyper-Converged Infrastructure (HCI) solution which supports KVM, ESXi and Hyper-V. HCI is a system in which compute and storage are tightly coupled, as opposed to Converged Infrastructure (such as vBlock), where each component (compute, storage, etc.) can be used independently. Last year they launched their own hypervisor, Acropolis, based on KVM. Their management GUI, Prism, is HTML5: no Flash, no Java. It’s pretty nice; it seems like they have done what Red Hat couldn’t do with RHEV-M.

This is a nice intro video on Nutanix. If you want to give it a spin, try the Community Edition as Nested Virtualization on ESXi. You can also spin up a Nutanix instance on Ravello.

Ravello is another interesting company, founded by the founders of Qumranet, the guys who made KVM. They let you run hypervisors in the cloud, on AWS and GCE. The last time I tried, you could get up to 8 GB of RAM for free. Oracle has since acquired Ravello.

Cisco Demo Cloud
If you are looking for an NX-OS simulator to learn networking, look no further than the Cisco Demo Cloud lab. All you need is a Cisco.com account, which is free! Once you log in, look for “Cisco Nexus 7000: Introduction to Cisco NX-OS v1”.

Tech Field Day
This is a valid reason to watch YouTube at work! If you have not heard of Tech Field Day, search for it on YouTube. Startups and big tech companies come to showcase their new stuff at Tech Field Day. It may not be your field, but it will give you the awareness.

Ravello at Tech Field Day
Cisco ACI at Tech Field Day

How to do batch forward and reverse lookups using dig and host

For this example, I have a list of FQDNs and IPs in two files named hostnames and ips. We will look at how to do bulk queries using dig and host in six examples.
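For reference, both input files are plain lists with one entry per line. A minimal sketch of what they might contain (the hostnames match the sample outputs below; the IP addresses are hypothetical placeholders from the 192.0.2.0/24 documentation range):

```shell
# Create sample input files: one FQDN or IP per line.
printf '%s\n' rtfmp107.example.com rtfmp109.example.com > hostnames
printf '%s\n' 192.0.2.107 192.0.2.109 > ips
```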

1. Forward lookup using dig in batch mode, returning only IPs

dig -f hostnames +short

2. Forward lookup using dig in batch mode, returning A records

dig -f hostnames +noall +answer
rtfmp107.example.com. 479  IN      A
rtfmp109.example.com. 5    IN      A
rtfmp111.example.com. 900  IN      A
rtfmp113.example.com. 804  IN      A
rtfmp115.example.com. 804  IN      A
rtfmp117.example.com. 186  IN      A
rtfmp119.example.com. 5    IN      A
rtfmp139.example.com. 900  IN      A
rtfmp141.example.com. 4    IN      A
rtfmp143.example.com. 1    IN      A
rtfmp145.example.com. 1    IN      A

3. Bulk reverse lookup with dig and xargs

cat ips | xargs -Ih dig +noall +answer -x h
900 IN      PTR     rtfmp107.example.com.
900 IN      PTR     rtfmp109.example.com.
900 IN      PTR     rtfmp111.example.com.
900 IN      PTR     rtfmp113.example.com.
900 IN      PTR     rtfmp115.example.com.
900 IN      PTR     rtfmp117.example.com.
900 IN      PTR     rtfmp119.example.com.
900 IN      PTR     rtfmp139.example.com.
900 IN      PTR     rtfmp141.example.com.
900 IN      PTR     rtfmp143.example.com.
352 IN      PTR     rtfmp145.example.com.

4. To extract just the names

cat ips  | xargs -Ih dig +noall +answer -x  h | awk '{print $5}' | sed 's/com./com/g'
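The sed here is only stripping the trailing dot from the PTR targets; `sed 's/\.$//'` does the same thing for any domain, not just .com. An offline sketch of the awk/sed step, applied to a hypothetical dig answer line (no DNS lookup needed):

```shell
# $5 of a "dig +noall +answer" PTR line is the target name;
# sed then drops the trailing root dot.
echo '107.2.0.192.in-addr.arpa. 900 IN PTR rtfmp107.example.com.' \
  | awk '{print $5}' | sed 's/\.$//'
# prints: rtfmp107.example.com
```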

5. Forward lookup with host and xargs for a list of hostnames

cat hostnames | xargs -Ih host h
rtfmp107.example.com has address
rtfmp109.example.com has address
rtfmp111.example.com has address
rtfmp113.example.com has address
rtfmp115.example.com has address
rtfmp117.example.com has address
rtfmp119.example.com has address
rtfmp139.example.com has address
rtfmp141.example.com has address
rtfmp143.example.com has address
rtfmp145.example.com has address

6. Reverse lookup with host and xargs for a list of IPs

cat ips | xargs -Ih host h
domain name pointer rtfmp107.example.com.
domain name pointer rtfmp109.example.com.
domain name pointer rtfmp111.example.com.
domain name pointer rtfmp113.example.com.
domain name pointer rtfmp115.example.com.
domain name pointer rtfmp117.example.com.
domain name pointer rtfmp119.example.com.
domain name pointer rtfmp139.example.com.
domain name pointer rtfmp141.example.com.
domain name pointer rtfmp143.example.com.
domain name pointer rtfmp145.example.com.

Megaraid, media errors

The application (Hadoop) logs I/O errors:

2016-02-15 02:48:04,911 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: \
BP-   2136893094-Server_IP-1400619662809 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used \
block  storage: /path
ExitCodeException exitCode=1: du: cannot access `/path': Input/output error
du: cannot access `/path': Input/output error
du: cannot access `/path': Input/output error

SCSI reports “Medium Error” in /var/log/messages:

Feb 15 02:47:04 hostame kernel: EXT4-fs error (device sdi): __ext4_get_inode_loc: unable to read inode \
block -  inode=50331696, block=201326626
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Unhandled sense code
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Sense Key : Medium Error [current]
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Add. Sense: No additional sense information
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] CDB: Read(10): 28 00 60 00 01 10 00 00 08 00
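To narrow down which disk is throwing the errors, one can count the “Medium Error” lines per device in the log. A small sketch, assuming the syslog format shown above, where the device appears as a bracketed field like [sdi] (the sample file and its contents here are illustrative; point the awk at /var/log/messages on a real host):

```shell
# Sample lines in the /var/log/messages format quoted above.
cat > sample_messages <<'EOF'
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Sense Key : Medium Error [current]
Feb 15 02:48:33 hostame kernel: sd 0:2:8:0: [sdi] Sense Key : Medium Error [current]
EOF

# Count "Medium Error" occurrences per device; the device name is the
# whitespace-delimited field that looks like "[sdX]".
awk '/Medium Error/ {
       for (i = 1; i <= NF; i++) if ($i ~ /^\[sd/) dev = $i
       count[dev]++
     }
     END { for (d in count) print d, count[d] }' sample_messages
# prints: [sdi] 2
```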


Run OpenShift Origin master as a container behind a proxy

The OpenShift master can be deployed as a container; in fact, that is the only way to run it on a RHEL Atomic host. I deployed the open source version, Origin, as a container on an Atomic host following this guide. When I tried to create a new project, it could not download the image from Docker Hub, because the host does not have direct access to the Internet. I went to the user mailing list and opened a GitHub issue. Thanks to the good folks at Red Hat, the solution is to pass the HTTP_PROXY and HTTPS_PROXY environment variables to the docker run command with the -e option.

docker run -d --name "origin" --privileged --pid=host --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys \
  -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
  -e HTTP_PROXY=http://proxy.xxx.com:8080 \
  -e HTTPS_PROXY=http://proxy.xxx.com:8080 \
  openshift/origin start