From the man page:
-sn (No port scan)
Systems administrators often find this option valuable as well. It can easily be used to count available machines on a network or monitor server availability.
Scan an entire subnet
nmap -sn 192.168.1.0/24
Scan a range of IP addresses
nmap -sn 192.168.1.1-10
$ nmap -sn 192.168.1.1-5
Starting Nmap 7.31 ( https://nmap.org ) at 2017-08-03 18:55 IST
Nmap scan report for 192.168.1.1
Host is up (0.0067s latency).
Nmap scan report for 192.168.1.2
Host is up (0.0069s latency).
Nmap scan report for 192.168.1.3
Host is up (0.0065s latency).
Nmap done: 5 IP addresses (3 hosts up) scanned in 1.23 seconds
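The man page's point about counting available machines is easy to script: count the "Host is up" lines in the scan output. A minimal sketch, run here against the captured output above so it is self-contained; in practice you would pipe `nmap -sn` straight into `grep`:

```shell
# Captured from the scan above; live version:
#   nmap -sn 192.168.1.1-5 | grep -c 'Host is up'
scan_output='Nmap scan report for 192.168.1.1
Host is up (0.0067s latency).
Nmap scan report for 192.168.1.2
Host is up (0.0069s latency).
Nmap scan report for 192.168.1.3
Host is up (0.0065s latency).'

up_count=$(printf '%s\n' "$scan_output" | grep -c 'Host is up')
echo "$up_count hosts up"   # -> 3 hosts up
```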
You can also use -sP, which was the name of this option in older releases of Nmap; it is still accepted as an alias for -sn.
At a former workplace, the DB2 team was furious because they could not connect to the database remotely. They kept trying to telnet to a port on the server and failing: the server was not listening on the port they were telnetting to. They had rebooted the server twice, and nobody thought of checking the listening ports. The fix was a simple restart of the DB service.
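Checking the listening ports takes one command and would have saved two reboots. A sketch using ss; 50000 is DB2's commonly used default port, so substitute whatever your instance actually uses:

```shell
# Is anything listening on the DB2 port? (50000 is an assumed default.)
port=50000
if ss -ltn 2>/dev/null | grep -q ":$port "; then
    msg="something is listening on port $port"
else
    msg="nothing is listening on port $port - restart the DB service"
fi
echo "$msg"
```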
Ping the server. If the server does not respond to ping, access it locally by way of RDP, IMM, RSA, CIMC, or the UCS Manager KVM Console and troubleshoot from there. If the server does reply to ping, scan the port.
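That triage flow can be sketched as a small shell function; the host name and port list below are placeholder example values:

```shell
# Ping first; scan only if the host answers, otherwise fall back to the
# out-of-band console. Host and ports are examples - adjust to your setup.
check_host() {
    if ping -c 2 -W 2 "$1" >/dev/null 2>&1; then
        nmap -p 22,80,443 "$1"
    else
        echo "$1 does not answer ping - use RDP/IMM/RSA/CIMC/UCS KVM"
    fi
}
check_host server.example.com
```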
I am unable to open the web interface of the IMM on an X series IBM server. I could ping it. I could ssh to it. So the IMM was clearly not hung, but an nmap scan did not show port 80 or 443 open.
Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2016-08-26 11:10 CDT
Interesting ports on IMM_HOSTNAME (IMM_IP):
Not shown: 1675 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
23/tcp   open  telnet
427/tcp  open  svrloc
3389/tcp open  ms-term-serv
3900/tcp open  udt_os
nmap scan from a working IMM
I am trying to mount a NAS volume in Linux and the mount is backgrounding:
mount.nfs: backgrounding "NAS_IP:/vol/path"
mount.nfs: mount options: "bg,intr,tcp,timeo=600,vers=3,retrans=2,addr=NAS_IP"
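The "backgrounding" message is the bg option at work: when the first attempt fails, mount.nfs forks and keeps retrying in the background instead of printing the error. Remounting with fg makes the failure visible immediately. A sketch with the same options as above; the export and mountpoint are placeholders, and the command is echoed here rather than run:

```shell
# Swap bg for fg so mount.nfs reports the real error instead of
# silently retrying in the background.
cmd='mount -t nfs -o fg,intr,tcp,timeo=600,vers=3,retrans=2 NAS_IP:/vol/path /mnt/nas'
echo "$cmd"   # run this as root once the placeholders are filled in
```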
I have been given a task to clear non-VMware hosts off some subnets so we can turn on DHCP and PXE for Auto Deploy. I have identified the inactive IPs, and our DNS admin reclaimed them. Now I need a way to verify that the remaining active IPs are VMware ESXi hosts.
My first method is to ssh to each IP and run vmware -v in a for loop.
for i in `cat list`; do
    ssh -q -o "BatchMode yes" root@$i vmware -v
done
We have ssh keys set up, but on every subnet there are a bunch of hosts where I cannot ssh with keys, and I do not have time to fix them all.
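Where keys fail, a port scan can narrow things down without logging in at all: ESXi hosts normally listen on TCP 902 (the host agent) and 443. Treat an open 902 as a hint, not proof. The awk filter below is demonstrated on a sample line in nmap's grepable output format; the IP and service fields are made up:

```shell
# Live version: nmap -p 443,902 --open -iL list -oG - | awk '/902\/open/{print $2}'
sample='Host: 192.168.1.50 ()	Ports: 443/open/tcp//https///, 902/open/tcp//iss-realsecure///'
printf '%s\n' "$sample" | awk '/902\/open/{print $2}'   # -> 192.168.1.50
```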