In addition to crawling and indexing web pages via submitted URLs, Google recently launched a feature that uses RSS and Atom feeds to discover new web pages, which helps it index them faster than traditional methods. As a result, you must make sure that crawling of your feeds isn't disallowed by your robots.txt. To find out whether Googlebot can crawl your feeds, test your feed URLs with a robots.txt tester. "Going forward, we might also explore mechanisms such as PubSubHubbub to identify updated items," explained Google.
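If you'd rather check programmatically than use a web-based tester, Python's standard-library `urllib.robotparser` can evaluate a robots.txt against a given user agent. This is a minimal sketch; the robots.txt content, the `example.com` domain, and the feed paths are all hypothetical:

```python
from urllib import robotparser

# Hypothetical robots.txt that blocks the /feeds/ directory for all crawlers.
robots_txt = """\
User-agent: *
Disallow: /feeds/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A feed under the disallowed directory is blocked for Googlebot...
print(rp.can_fetch("Googlebot", "https://example.com/feeds/atom.xml"))  # False

# ...while other pages remain crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))  # True
```

In production you would point `RobotFileParser.set_url()` at your live robots.txt and call `read()` instead of parsing an inline string.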