It seems the general consensus is in favor of Python, and I agree, for a number of reasons. You mentioned Beautiful Soup, which is probably one of the most widely used and actively developed scraping libraries out there. As some of the other comments have mentioned, it takes a lot of the pain out of the process for you, and with the size of the Python (and Beautiful Soup) community you'll rarely run into a problem that someone hasn't already solved. Aside from Beautiful Soup, Scrapy is another widely used Python crawling/scraping framework.
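To give you a feel for it, here's a minimal sketch of the kind of thing Beautiful Soup does. This assumes the `beautifulsoup4` package is installed; the HTML snippet and tag names are made up for illustration (in a real scraper you'd fetch the page with something like `requests` first):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Stand-in for HTML you would normally fetch over the network.
html = """
<html><body>
  <h1>Example Listings</h1>
  <ul>
    <li><a href="/item/1">First item</a></li>
    <li><a href="/item/2">Second item</a></li>
  </ul>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Pull out every link's text and href in one pass.
links = [(a.get_text(), a["href"]) for a in soup.find_all("a")]
print(links)  # [('First item', '/item/1'), ('Second item', '/item/2')]
```

That's really the whole appeal: you point it at messy markup and query it like a structured document instead of wrestling with regexes.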
Python also plays well with MongoDB, in that there are plenty of packages for using the two together, including some that connect Beautiful Soup output straight to MongoDB. And MongoDB is probably the database you'll end up working with if you're planning on web scraping; at least, that's the DB I've seen clients use the most when doing scraping work.
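The typical pattern is to parse pages into plain dicts and hand those to MongoDB. Here's a hedged sketch of that flow; the HTML, class names, database name, and connection URL are all made up, and the actual insert (via `pymongo`, the official MongoDB driver) is left commented out since it needs a running server:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4 pymongo

# Stand-in markup for a scraped product listing page.
html = """
<div class="product"><span class="name">Widget</span><span class="price">9.99</span></div>
<div class="product"><span class="name">Gadget</span><span class="price">19.99</span></div>
"""

def scrape_products(markup):
    """Turn scraped markup into plain dicts, which MongoDB stores as-is."""
    soup = BeautifulSoup(markup, "html.parser")
    return [
        {
            "name": div.find("span", class_="name").get_text(),
            "price": float(div.find("span", class_="price").get_text()),
        }
        for div in soup.find_all("div", class_="product")
    ]

docs = scrape_products(html)
print(docs)

# With a MongoDB server running locally, inserting is one call:
# from pymongo import MongoClient
# client = MongoClient("mongodb://localhost:27017")
# client.scraper_db.products.insert_many(docs)
```

Since MongoDB documents are just JSON-like dicts, there's no schema to set up before you start dumping scraped data into it, which is a big part of why it shows up so often in scraping projects.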
Ruby isn't a bad choice either, with Nokogiri and Mechanize, two Ruby gems that show up a lot in web scraping applications.
Both Ruby and Python are simple enough to jump into, and you wouldn't go wrong choosing either. That said, I'd suggest Python in the end.
If you feel like it, share your web scraping project with us; I'd be interested in seeing it!