Just a heads up that there is an official statement from Google about your robots.txt file, with a rarely discussed note about how to disallow the spider from crawling sub-directories.
They will be publishing a series of posts about how to manage access to your site and its pages, so subscribe, or come back here and I’ll try to keep you apprised.
Official Google Blog: Controlling how search engines access and index your website: “The key is a simple file called robots.txt that has been an industry standard for many years. It lets a site owner control how search engines access their web site. With robots.txt you can control access at multiple levels — the entire site, through individual directories, pages of a specific type, down to individual pages. Effective use of robots.txt gives you a lot of control over how your site is searched, but it’s not always obvious how to achieve exactly what you want. This is the first of a series of posts on how to use robots.txt to control access to your content.”
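As a minimal sketch of the multi-level control the quote describes, a robots.txt at the root of your site might look like this (the directory and file names here are hypothetical examples, not anything prescribed by Google):

```txt
# Applies to all crawlers
User-agent: *
# Block an entire sub-directory (note the trailing slash)
Disallow: /private/
# Block a single page
Disallow: /drafts/notes.html

# Rules for Google's crawler specifically
User-agent: Googlebot
Disallow: /staging/
```

Crawlers that honor the standard read this file from `http://yoursite.com/robots.txt` before fetching anything else, and each `Disallow` line is matched as a path prefix against requested URLs.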