More and more web applications publish APIs that let developers integrate their data and functionality into useful mashups. The result is that thousands of mashed-up websites and applications now depend on these APIs working correctly all the time; when an API goes down, it triggers a real-time domino effect of downtime across the sites that rely on it. Companies with open APIs therefore have an obligation to ensure their APIs are up at least 99.9% of the time.
Today, we released a new API report revealing the uptime of 50 of the most popular “mashed up” APIs as ranked by ProgrammableWeb, including Google Maps, Google Search, Flickr, Twitter, YouTube, Amazon, eBay, Facebook, Microsoft Virtual Earth, and Wikipedia. The good folks over at ProgrammableWeb currently list over 3,000 open APIs and nearly 5,700 available mashups on their site, and there are even more on the market.
Ten APIs performed perfectly, with 100% uptime during the reporting period: Basecamp, Delicious, eBay, goo.gl, Google Buzz, Google Charts, Google Maps, Google Search, Quora, and SimpleGeo. Two of the APIs, GeoNames (97.4%) and Eventful (97.2%), performed poorly, with nearly a full day of downtime each. An ailing MySpace sat at the bottom of the list with 94.3% uptime, the equivalent of nearly two full days of downtime.
API-status.com, the WatchMouse website for monitoring popular APIs, checks each API for a valid result; if the result is wrong, or if it arrives after four seconds, the check is logged as an error and the API is counted as unavailable. The availability (uptime) percentage is derived from the number of errors reported. API-status.com also shows a seven-day history, a 24-hour snapshot, and performance indicators across multiple countries.
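A single monitoring check of this kind can be sketched as follows. This is an illustrative sketch only, not WatchMouse's actual implementation; the URL and expected substring are placeholders you would replace per API:

```python
import time
import urllib.request

TIMEOUT_SECONDS = 4.0  # responses slower than this count as errors


def classify_result(elapsed, body, expected_substring):
    """Classify one check: a wrong result, or a result received after
    four seconds, is logged as an error (the API counts as unavailable)."""
    if elapsed > TIMEOUT_SECONDS:
        return "error"
    if expected_substring not in body:
        return "error"
    return "ok"


def check_api(url, expected_substring):
    """Perform one monitoring check: fetch the URL and classify the result."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except Exception:  # timeout, connection failure, HTTP error, ...
        return "error"
    return classify_result(time.monotonic() - start, body, expected_substring)
```

Note that a slow-but-correct response and an outright failure are treated identically here, which matches the report's definition of unavailability.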
30-Day Report Card and Methodology
We monitored the availability of 50 API/cloud web services from February 16, 2011 to March 17, 2011. In accordance with industry standards, availability of 99.9% or greater is regarded as “good” uptime, while anything below 99% is regarded as “poor”. The testing methodology consists of one simple API call checked for a valid result. For most APIs this means an authentication action such as a login, followed by a search or listing action, plus a check of the expected result. A check is logged as an error either if the call returns an error outright or if the expected result arrives after four seconds. These errors are then used to calculate the availability (uptime) percentage for each site. Each site is checked in real time using the WatchMouse API Monitoring technology, which can measure and report the availability of any public API or web service; the resulting data is made public with the WatchMouse Public Status Pages. Companies like Twitter and Wikipedia use the tool, which is hosted on the Amazon cloud platform, to inform customers and report publicly on the status of their services.
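The availability figure itself is simple arithmetic: the share of checks that did not return an error, and the corresponding downtime over the reporting window. A minimal sketch of that calculation (the function names are ours, not part of the report):

```python
def availability(total_checks, error_count):
    """Uptime percentage: the share of checks that did not return an error."""
    if total_checks == 0:
        raise ValueError("no checks recorded")
    return 100.0 * (total_checks - error_count) / total_checks


def downtime_days(uptime_pct, period_days=30):
    """Equivalent days of downtime over the reporting period."""
    return (100.0 - uptime_pct) / 100.0 * period_days


# One error in a thousand checks is exactly the 99.9% "good" threshold:
print(availability(1000, 1))   # 99.9
# MySpace's 94.3% over a 30-day window works out to about 1.7 days down,
# i.e. "nearly two full days" as reported:
print(downtime_days(94.3))
```

The same arithmetic reproduces the figures quoted above: 97.2% uptime corresponds to roughly 0.84 days (nearly a full day) of downtime over 30 days.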