I realized, only shortly after posting the news of the new sitemap autodiscovery protocol, that there are few resources explaining the very simple steps needed to set up autodiscovery through your robots.txt file.
Assuming you’ve named the map “sitemap.xml,” simply add the following line to the robots.txt file located at the highest level of your web server:
Sitemap: https://www.YOURDOMAIN.com/sitemap.xml
It does not matter where in the file you place that code.
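For example, a complete robots.txt could look something like this (the user-agent and disallow rules here are just placeholders for whatever you already have; only the Sitemap line matters for autodiscovery):

User-agent: *
Disallow: /cgi-bin/
Sitemap: https://www.YOURDOMAIN.com/sitemap.xml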
If you have more than one sitemap, you should have a sitemap index file, and it is that file's location you should list instead. What is a sitemap index file? If you have more than 100 pages you want crawled through a sitemap, you should break your map up into smaller lists of fewer than 100 URLs each. An index file points the bots to each sitemap.
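Here is a minimal sketch of what such an index file could look like, assuming two hypothetical sitemaps named sitemap1.xml and sitemap2.xml sitting at the root of your domain:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.YOURDOMAIN.com/sitemap1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.YOURDOMAIN.com/sitemap2.xml</loc>
  </sitemap>
</sitemapindex>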
Spiders will automatically find your sitemap when they hit your domain and read the robots.txt file. Now, technically, there is no need to submit sitemaps through Site Explorer or Webmaster Central, though there is no penalty for doing so. I keep my work with Yahoo! and Google live, as I appreciate the other data and shortcuts they make available, and let autodiscovery work in the background. Should you continue to submit, by the way, the little-known sitemap submission for Ask is through this URL:
https://submissions.ask.com/ping?sitemap=https://www.YOURDOMAIN.com/sitemap.xml
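If you want to script that ping rather than paste it into a browser, a short sketch like the following would do it (the sitemap URL below is a placeholder; substitute your own):

import urllib.parse
import urllib.request

# Placeholder sitemap address -- replace with your own.
sitemap_url = "https://www.YOURDOMAIN.com/sitemap.xml"

# Build the Ask ping URL, percent-encoding the sitemap address as the query value.
ping_url = "https://submissions.ask.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")

# Issue the request; a 200 response means the ping was received.
with urllib.request.urlopen(ping_url) as response:
    print(response.status, response.reason)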