With the rapid global growth in the consumerization of information technology, monitoring your application or service has grown increasingly complex. Monitoring your solution from only one or a few locations doesn’t provide effective visibility into global availability, nor does it identify where connectivity issues may be occurring within the internet.
I evaluated options such as services that monitor your application from multiple locations, like AlertSite, and solutions that generate statistical analysis, like Google Analytics, but neither could monitor internet fabrics. Akamai’s real-time web monitor was closer but didn’t provide enough granularity. I needed to report on and filter my monitoring data by the following data dimensions, so I built an ETL data warehouse solution:
- Date and time
- Client public IP address
- Number of workstations behind each public IP address
- Client internet service provider
- Client location represented as geospatial longitude and latitude
- Client location represented as City, Region, Country
- Client public IP reverse DNS lookup
- Client average response time
- Correlated relationships between client ISP networks (ARIN)
- ARIN network identity information
- A lifecycle that dynamically adds and removes data
- Identify BGP prefixes within AS numbers, dynamically generate routing topologies, and monitor for changes
- Collect performance and availability times for all internet fabric paths and dynamically represent that data
- Collect real internet fabric network topologies by identifying the actual routes that packets follow
- Granularity to the point that a single client internet connection can be identified and analyzed all the way back to the service endpoint
- Broad reporting to alert on and identify widespread issues with peering relationships and with the internet in general
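To make a couple of these dimensions concrete, here is a minimal Python sketch of two enrichment steps the ETL pipeline would perform: a reverse DNS lookup on a client public IP, and aggregating raw monitoring records into an average response time per client IP. The function names, record fields, and sample IPs are illustrative assumptions, not the actual warehouse schema.

```python
import socket
from collections import defaultdict


def reverse_dns(ip):
    """Best-effort reverse DNS lookup for a client public IP.
    Returns None when no PTR record exists (hypothetical helper)."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None


def average_response_times(records):
    """Collapse raw monitoring records into average response time
    (milliseconds) per client public IP -- one of the reporting
    dimensions listed above. Record shape is assumed."""
    totals = defaultdict(lambda: [0.0, 0])
    for rec in records:
        entry = totals[rec["client_ip"]]
        entry[0] += rec["response_ms"]
        entry[1] += 1
    return {ip: total / count for ip, (total, count) in totals.items()}


# Three raw records collapse into per-IP averages
# (documentation-reserved example addresses).
raw = [
    {"client_ip": "203.0.113.7", "response_ms": 120.0},
    {"client_ip": "203.0.113.7", "response_ms": 80.0},
    {"client_ip": "198.51.100.2", "response_ms": 200.0},
]
print(average_response_times(raw))
# {'203.0.113.7': 100.0, '198.51.100.2': 200.0}
```

In the real pipeline these lookups would run in the transform stage and be cached, since reverse DNS and ARIN queries are far slower than the aggregation itself.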
Here is a screenshot of what the beta of this technology looks like; the example below shows clients connecting to services at iQmetrix.