When the Backspace key does not work

Select the text, then delete or retype it. I have been doing that on my 2012 Acer Aspire One 725, including while typing this.

Nutanix, Ravello, Cisco Cloud Lab and Tech Field Day

Nutanix
When I learned about Nutanix last year, I was glued to it the whole day. Nutanix is a hyper-converged infrastructure (HCI) solution that supports KVM, ESXi and Hyper-V. In HCI, compute and storage are tightly coupled, as opposed to converged infrastructure (such as vBlock), where each component (compute, storage, etc.) can be used independently. Last year they launched Acropolis, their own KVM-based hypervisor. Their management GUI, Prism, is HTML5: no Flash, no Java. It's pretty nice; it seems they have done what Red Hat couldn't do with RHEV-M.

This is a nice intro video on Nutanix. If you want to give it a spin, try the Community Edition as a nested VM on ESXi. You can also spin up a Nutanix instance on Ravello.

Ravello
Ravello is another interesting company, founded by the founders of Qumranet, the guys who made KVM. They let you run hypervisors in the cloud, on AWS and GCE. The last time I tried, you could get up to 8 GB of RAM for free. Oracle has since acquired Ravello.

Cisco Demo Cloud
If you are looking for an NX-OS simulator to learn networking, look no further than the Cisco Demo Cloud lab. All you need is a Cisco.com account, which is free! Once you log in, look for “Cisco Nexus 7000: Introduction to Cisco NX-OS v1”.

Tech Field Day
This is a valid reason to watch YouTube at work! If you have not heard of Tech Field Day, search for it on YouTube. Startups and big tech companies come to showcase their new products at Tech Field Day. It may not be your field, but it will broaden your awareness.

Examples:
Ravello at Tech Field Day
Cisco ACI at Tech Field Day

How to do a batch forward and reverse lookup using dig and host

For this example, I have a list of FQDNs and IPs in two files, named hostnames and ips; a few sample lines from each are shown below. We will look at how to do bulk queries using dig and host in six examples.
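
Each file has one entry per line. For reference, this is roughly what the first few lines look like (the names and IPs here are taken from the lookup output further down):

head -3 hostnames
rtfmp107.example.com
rtfmp109.example.com
rtfmp111.example.com

head -3 ips
192.168.1.148
192.168.1.149
192.168.1.150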

1. Forward lookup using dig in batch mode, returning only IPs

dig -f hostnames +short
192.168.1.148
192.168.1.149
192.168.1.150
192.168.1.151
192.168.1.152
192.168.1.153
192.168.1.154
192.168.1.181
192.168.1.182
192.168.1.183
192.168.1.184

2. Forward lookup using dig in batch mode, returning the full A records

dig -f hostnames +noall +answer
rtfmp107.example.com. 479  IN      A       192.168.1.148
rtfmp109.example.com. 5    IN      A       192.168.1.149
rtfmp111.example.com. 900  IN      A       192.168.1.150
rtfmp113.example.com. 804  IN      A       192.168.1.151
rtfmp115.example.com. 804  IN      A       192.168.1.152
rtfmp117.example.com. 186  IN      A       192.168.1.153
rtfmp119.example.com. 5    IN      A       192.168.1.154
rtfmp139.example.com. 900  IN      A       192.168.1.181
rtfmp141.example.com. 4    IN      A       192.168.1.182
rtfmp143.example.com. 1    IN      A       192.168.1.183
rtfmp145.example.com. 1    IN      A       192.168.1.184

3. Bulk reverse lookup with dig and xargs

cat ips | xargs -Ih dig +noall +answer -x  h
148.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp107.example.com.
149.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp109.example.com.
150.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp111.example.com.
151.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp113.example.com.
152.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp115.example.com.
153.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp117.example.com.
154.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp119.example.com.
181.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp139.example.com.
182.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp141.example.com.
183.1.168.192.in-addr.arpa. 900 IN      PTR     rtfmp143.example.com.
184.1.168.192.in-addr.arpa. 352 IN      PTR     rtfmp145.example.com.

4. Reverse lookup with dig, extracting just the names

cat ips  | xargs -Ih dig +noall +answer -x  h | awk '{print $5}' | sed 's/com./com/g'
rtfmp107.example.com
rtfmp109.example.com
rtfmp111.example.com
rtfmp113.example.com
rtfmp115.example.com
rtfmp117.example.com
rtfmp119.example.com
rtfmp139.example.com
rtfmp141.example.com
rtfmp143.example.com
rtfmp145.example.com

5. Forward lookup with host and xargs for a list of hostnames

cat hostnames | xargs -Ih host h
rtfmp107.example.com has address 192.168.1.148
rtfmp109.example.com has address 192.168.1.149
rtfmp111.example.com has address 192.168.1.150
rtfmp113.example.com has address 192.168.1.151
rtfmp115.example.com has address 192.168.1.152
rtfmp117.example.com has address 192.168.1.153
rtfmp119.example.com has address 192.168.1.154
rtfmp139.example.com has address 192.168.1.181
rtfmp141.example.com has address 192.168.1.182
rtfmp143.example.com has address 192.168.1.183
rtfmp145.example.com has address 192.168.1.184

6. Reverse lookup with host and xargs for a list of IPs

cat ips | xargs -Ih host h
148.1.168.192.in-addr.arpa domain name pointer rtfmp107.example.com.
149.1.168.192.in-addr.arpa domain name pointer rtfmp109.example.com.
150.1.168.192.in-addr.arpa domain name pointer rtfmp111.example.com.
151.1.168.192.in-addr.arpa domain name pointer rtfmp113.example.com.
152.1.168.192.in-addr.arpa domain name pointer rtfmp115.example.com.
153.1.168.192.in-addr.arpa domain name pointer rtfmp117.example.com.
154.1.168.192.in-addr.arpa domain name pointer rtfmp119.example.com.
181.1.168.192.in-addr.arpa domain name pointer rtfmp139.example.com.
182.1.168.192.in-addr.arpa domain name pointer rtfmp141.example.com.
183.1.168.192.in-addr.arpa domain name pointer rtfmp143.example.com.
184.1.168.192.in-addr.arpa domain name pointer rtfmp145.example.com.
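
If you want a quick sanity check that the forward and reverse records agree, the two lookups can be combined in a small loop. This is only a sketch built on the same hostnames file and the dig options used above; adjust it to your needs:

#!/bin/bash
# For every name in the hostnames file: resolve it to an IP,
# reverse-resolve that IP, and print all three side by side.
while read -r name; do
    ip=$(dig +short "$name" | head -1)
    ptr=$(dig +short -x "$ip")
    echo "$name -> $ip -> $ptr"
done < hostnames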

MegaRAID, media errors

The application (Hadoop) logs I/O errors:

2016-02-15 02:48:04,911 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage for block pool: \
BP-2136893094-Server_IP-1400619662809 : BlockPoolSliceStorage.recoverTransitionRead: attempt to load an used \
block storage: /path
ExitCodeException exitCode=1: du: cannot access `/path': Input/output error
du: cannot access `/path': Input/output error
du: cannot access `/path': Input/output error

SCSI reports “Medium Error” in /var/log/messages:

Feb 15 02:47:04 hostame kernel: EXT4-fs error (device sdi): __ext4_get_inode_loc: unable to read inode \
block -  inode=50331696, block=201326626
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Unhandled sense code
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Sense Key : Medium Error [current]
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] Add. Sense: No additional sense information
Feb 15 02:47:33 hostame kernel: sd 0:2:8:0: [sdi] CDB: Read(10): 28 00 60 00 01 10 00 00 08 00
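
When the kernel keeps logging Medium Error for a disk behind a MegaRAID controller, one reasonable next step is to check the controller's per-drive error counters, assuming the LSI MegaCli utility is installed (the binary name and path vary by package: MegaCli, MegaCli64 or /opt/MegaRAID/MegaCli/MegaCli64):

# List physical drives with their slot, firmware state and error counters
MegaCli64 -PDList -aALL | egrep -i 'slot number|firmware state|media error|predictive failure'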


Run OpenShift Origin master as a container behind a proxy

The OpenShift master can be deployed as a container; in fact, that is the only way to run it on a RHEL Atomic host. I deployed the open-source version, Origin, as a container on an Atomic host following this guide. When I tried to create a new project, it could not download the image from Docker Hub, because the host has no direct access to the Internet. I asked on the user mailing list and opened a GitHub issue. Thanks to the good folks at Red Hat, the solution is to pass the HTTP_PROXY and HTTPS_PROXY environment variables to the docker run command with the -e option.

docker run -d --name "origin" --privileged --pid=host --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys \
  -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
  -e HTTP_PROXY=http://proxy.xxx.com:8080 \
  -e HTTPS_PROXY=http://proxy.xxx.com:8080 \
  openshift/origin start
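
To confirm the variables actually reached the container, check its environment (a quick sanity check, not part of the referenced guide):

docker exec origin env | grep -i proxy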