How to find the number of processors from dmidecode output?

If you have access to the system, you would probably use lscpu, nproc or /proc/cpuinfo. How do you find the number of available CPUs in a Linux server from dmidecode output if that is all you have?

$ grep CPU dmidecode_output.txt 
Socket Designation: CPU 0
Socket Designation: CPU 1
Socket Designation: CPU 2
Socket Designation: CPU 3
Socket Designation: CPU 4
Socket Designation: CPU 5
Socket Designation: CPU 6
Socket Designation: CPU 7
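
One way to get the counts from the saved output (a rough sketch; the exact fields present depend on the dmidecode version and hardware, and unpopulated sockets can also show up here):

$ grep -c 'Socket Designation: CPU' dmidecode_output.txt    # number of processor sockets
$ grep 'Core Count' dmidecode_output.txt                     # cores per socket, if reported
$ grep 'Thread Count' dmidecode_output.txt                   # threads per socket, if reported

The output above shows eight sockets; multiplying that by the reported thread count per socket gives the number of logical CPUs the OS would see.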

How to add multiple gateways in Linux

I have two interfaces, eth0 and eth1, which belong to different VLANs and are reachable from their respective VLANs. I need to reach eth1 from the other VLAN, and it doesn’t work. Inter-VLAN routing is configured on the switch, so in principle it should work. When I ping eth1 from the other VLAN, it times out. This suggests the problem is not reaching eth1 but that the response cannot make it back: when the server tries to respond to the ICMP traffic it receives on eth1, it sends the reply via the default gateway, which belongs to eth0. Hence there is a need to configure a per-interface gateway. Windows lets you do this easily.

To fix this, I need to configure policy-based routing using iproute2. I followed the steps from here. The steps involve creating a routing table for each interface, a route and default gateway for each subnet, and then rules for each network (a rough sketch of these commands follows the notes below).

As the referenced blog has a neat write-up of the configuration and commands to use, I don’t think I need to duplicate it. Just two notes, though:

  1. You need to add the tables to /etc/iproute2/rt_tables before you can add the routes and rules
  2. To inspect the routes in each table, run

    ip route show table tablename
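
Something along these lines, where the table name, addresses and gateway are hypothetical placeholders for eth1’s subnet:

    # add a routing table for eth1 (the number and name are arbitrary)
    echo "200 vlan2" >> /etc/iproute2/rt_tables

    # route and default gateway for eth1's subnet in that table
    ip route add 192.0.2.0/24 dev eth1 src 192.0.2.10 table vlan2
    ip route add default via 192.0.2.1 dev eth1 table vlan2

    # rules so traffic from/to eth1's address is looked up in that table
    ip rule add from 192.0.2.10/32 table vlan2
    ip rule add to 192.0.2.10/32 table vlan2

With this in place, replies to traffic arriving on eth1 go out via eth1’s gateway instead of the default gateway on eth0.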

Manage IPMI remotely with SMCIPMITool

If the IPMI web page isn’t loading, you can run ipmitool locally from the OS installed on the server. If both are inaccessible but the IPMI is still reachable over the network, SMCIPMITool could be your friend. Unlike IMM and CIMC, IPMI doesn’t provide SSH.

You can grab a copy of SMCIPMITool from Supermicro. You can run it in two modes: shell mode and command mode.

Examples:
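
(The IP address, user name and password below are placeholders, and the exact invocation may differ between SMCIPMITool versions.)

    # shell mode: interactive session against the BMC
    SMCIPMITool 192.0.2.50 ADMIN PASSWORD shell

    # command mode: run a single command and exit, e.g. query chassis power status
    SMCIPMITool 192.0.2.50 ADMIN PASSWORD ipmi power status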

Importing a Workstation VM to AHV fails with NFS3ERR_NOENT error

To import a VMware Workstation VM to AHV, the KVM-based hypervisor from Nutanix, I copied it to an AHV container by WinSCPing it to the Prism IP on port 2222. While converting the vmdk to the AHV format from Image Configuration, I got an error:

NFS: Lookup of /path/filename-flat.vmdk failed with NFS3ERR_NOENT(-2)

The error code NFS3ERR_NOENT means NFS is unable to find the file.

From RFC 1813:

NFS3ERR_NOENT
       No such file or directory. The file or directory name
       specified does not exist.

The solution is to use the .vmdk and not the -flat.vmdk during conversion. The default disk format in VMware Workstation is monolithic sparse, a single growable file; a separate -flat.vmdk data file exists only for preallocated (flat) disks, which is why the -flat.vmdk path cannot be found here.

From here:

VIXDISKLIB_DISK_MONOLITHIC_SPARSE – Growable virtual disk contained in a single virtual disk file. This is the default type for hosted disk, and the only setting in the Virtual Disk API Sample Code sample program.
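
If you want to confirm what you have before uploading, a quick check like this helps (the file name is a placeholder, and this assumes qemu-img is installed):

    $ file myvm.vmdk            # a monolithic sparse disk is identified as a VMware disk image; a descriptor-only .vmdk is plain text
    $ qemu-img info myvm.vmdk   # reports the vmdk subformat and virtual size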

Nutanix: VM based Foundation stuck at 67%

The Foundation process, using Foundation VM 3.5, got stuck at 67% for more than an hour for one node. The other two nodes completed successfully. There was no update in the logs available at /home/nutanix/foundation/log/ in the Foundation VM for as long as it was stuck. The log for the problem node stood still here:

20170131 03:53:59 INFO Installation of Acropolis base software successful: Installation successful.
20170131 03:53:59 INFO Rebooting node. This may take several minutes: Rebooting node. This may take several minutes
20170131 03:53:59 INFO INFO: Rebooting node. This may take several minutes
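
For reference, watching for progress during a run is just a tail on that log directory (the exact file names vary by Foundation version):

    tail -f /home/nutanix/foundation/log/*.log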

From IPMI, I saw that the node had rebooted and the hypervisor had been installed, but not the CVM. I rebooted it again, which did not change anything.

As there was nothing else I could do, I killed the Foundation process and restarted it, which fixed the issue.
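
A generic way to find and stop the stuck process from inside the Foundation VM (a sketch; the command to start Foundation back up depends on the Foundation VM version):

    pgrep -af foundation       # list the running foundation processes
    sudo pkill -f foundation   # stop them, then start Foundation again as appropriate for your version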