Table of Contents

  • Introduction
  • Scraping with GoogleScraper
  • Data Analysis
  • Conclusion


In the following article we will demonstrate the powers of GoogleScraper by conducting a sample market analysis of the correlation between fashion brand names and the world's current top models (both male and female).

So what exactly are we going to do?

We will search two big search engines, namely Google and Bing, for fashion brand names such as

  1. Levi Strauss
  2. Coach
  3. Phillips-Van Heusen
  4. Estée Lauder
  5. Richemont
  6. Christian Dior
  7. The Gap
  8. Kering
  9. H&M
  10. LVMH

in combination with the names of the world's top 50 male and female models as listed by

This means we will iterate over all 10 brand names and 100 model names in total (which sums up to 1000 keyword combinations) with two search engines (Bing and Google, as mentioned above), which amounts to 2000 searches overall.

One such example keyword looks like this: "Amanda Murphy Levi Strauss"

Thus, an excerpt of the keyword file looks something like this:

Caroline Brasch Nielsen Christian Dior
Caroline Brasch Nielsen The Gap
Caroline Brasch Nielsen Kering
Caroline Brasch Nielsen H&M
Caroline Brasch Nielsen LVMH
Chiharu Okunugi Levi Strauss
Chiharu Okunugi Coach
Chiharu Okunugi Phillips-Van Heusen
Chiharu Okunugi Estée Lauder
Chiharu Okunugi Richemont
Chiharu Okunugi Christian Dior

What questions do we hope to answer with this scraping project?

I am not really knowledgeable when it comes to fashion brand market analysis, so I won't be the one to draw the correct conclusions from the data that we are going to scrape.

Additionally, it's quite debatable whether this scraping endeavour makes sense altogether. But I hope to find some answers to the following questions:

  • Which brand earns the most search hits all in all?
  • Which online store is listed most on SERP pages all in all?
  • Are the results from Bing and Google fundamentally different?
  • And many more :D

The technical side: Scraping with GoogleScraper

Extracting the models to scrape from our data source

Now let's look at the technical part and the way we are going to extract the data. First of all, we need to extract the 100 model names from the source site. For this purpose I will write a short jQuery snippet in the script console of Firebug:

This is what I came up with to extract the model names from the site:

jQuery('.capdiv > a').each(function (index) {
  console.log( $( this ).text() );
});
Apply it to both pages, once for the top 50 female models and once for the top 50 male models, and save the resulting list of 100 models to a local text file.

Working with Firebug and entering the JavaScript code should look similar to this:

Extracting results with jQuery

Creating the file with the keywords to scrape for

Now we have a file 'models.txt' with the top 50 female and top 50 male models. Next we need to generate the keywords that GoogleScraper can process (one keyword per line). For this task, I wrote the following small Python script that combines each model name with the 10 brand names listed above (I am sure there are more elegant ways, for example with itertools).

brands = [
    'Levi Strauss',
    'Coach',
    'Phillips-Van Heusen',
    'Estée Lauder',
    'Richemont',
    'Christian Dior',
    'The Gap',
    'Kering',
    'H&M',
    'LVMH',
]

with open('keywords.txt', 'wt') as outfile:
    for model in open('models.txt', 'r'):
        for brand in brands:
            model = model.strip()
            brand = brand.strip()
            if model and brand:
                s = '{model} {brand}\n'.format(model=model, brand=brand)
                outfile.write(s)
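
As hinted above, itertools offers a more elegant route. Here is an equivalent sketch using itertools.product, producing the same one-keyword-per-line output:

```python
# Equivalent keyword generation with itertools.product, as hinted above.
from itertools import product

brands = [
    'Levi Strauss', 'Coach', 'Phillips-Van Heusen', 'Estée Lauder',
    'Richemont', 'Christian Dior', 'The Gap', 'Kering', 'H&M', 'LVMH',
]

def make_keywords(models, brands):
    """Yield one 'model brand' keyword for every combination."""
    for model, brand in product(models, brands):
        model, brand = model.strip(), brand.strip()
        if model and brand:
            yield '{} {}'.format(model, brand)

if __name__ == '__main__':
    with open('models.txt') as infile, open('keywords.txt', 'wt') as outfile:
        for keyword in make_keywords(infile, brands):
            outfile.write(keyword + '\n')
```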

Then run the following commands in your shell, cmd, or whatever tool you use:

nikolai@nikolai:~/Projects/private/ScrapingProjects$ python3 
nikolai@nikolai:~/Projects/private/ScrapingProjects$ wc -l keywords.txt 
1070 keywords.txt
nikolai@nikolai:~/Projects/private/ScrapingProjects$ head keywords.txt 
Amanda Murphy Levi Strauss
Amanda Murphy Coach
Amanda Murphy Phillips-Van Heusen
Amanda Murphy Estée Lauder
Amanda Murphy Richemont
Amanda Murphy Christian Dior
Amanda Murphy The Gap
Amanda Murphy Kering
Amanda Murphy H&M
Amanda Murphy LVMH

So we just created our keyword file with 1070 keywords in total (the reason why we got more than the expected 1000 keywords is that there were slightly more than 50 female models on our source site).

Setting up GoogleScraper

As promised, we will use the tool GoogleScraper that I developed over the last year. You need Python 3.4 installed and a shell to work in. Then fire the following commands in a shell to install GoogleScraper:

virtualenv --python python3 env
source env/bin/activate
pip install GoogleScraper

The above commands install GoogleScraper in a virtual environment.

We are going to run the scrape with three different IP addresses: two SOCKS5 proxies that I set up on my two VPS servers, plus my own IP address. This makes it possible to scrape with 6 open browser windows without using the same IP address with the same search provider concurrently. So for each automated browser instance (steered with the Selenium framework, for the curious ones) we use exactly one IP address, and each of these browsers needs to request roughly 1070/3 ≈ 357 keywords all in all.
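
Tying this together, the invocation could look roughly like the following. Treat it as a sketch: the exact flag names vary between GoogleScraper versions, so verify them with `GoogleScraper --help`, and the proxy file is an assumption (one `socks5 host:port` entry per line).

```shell
# Sketch of the scrape invocation; flag names may differ between versions,
# so check them with: GoogleScraper --help
GoogleScraper -m selenium \
    --keyword-file keywords.txt \
    --search-engines "google,bing" \
    --proxy-file proxies.txt
```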

The scraping

The scraping process itself isn't very interesting except for one minor fact: it seems that Google is much more sensitive when it comes to automated searches in comparison to Bing. In my tests, Google would show the captcha much earlier than Bing. I am quite positive that this behaviour changed over the last few months (maybe because of this tool? With 250 stars on Github it is popular enough to have a negative impact and thus piss off people @Google).

When I used GoogleScraper for one of my previous customers, scraping Google with automated browsers was very easy: one could use up to 15 browser threads and request around 3 keywords per second (remember: using just one IP address). Now I cannot get anywhere near this; when I exceed one request per 5 seconds (15 times slower), Google will instantly ban me. So you either need quite a lot of proxies or a good scraping strategy.

Bing on the other hand still has no big restrictions :D

The results

To give you a little preview of the results:

Data Analysis - Playing with the results

Now that we have scraped the data, it's time to play with it.

You can access an interactive shell with GoogleScraper by calling

GoogleScraper --shell

This will give you an IPython3 session with some SQLAlchemy variables to inspect the results. These variables are:

  • session: An SQLAlchemy session.
  • ScraperSearch: This represents one run of the GoogleScraper application. It stores how long the scraping took and has an attribute serps: a link to all the SERPs that we found in the session.
  • SERP: A representation of a search engine results page. It has a handle to its links.
  • Link: What we want. It stores the link, the snippet and the displayed URL of an entry on the SERP page.

I assume that you are now in the shell session:

To list all searches that we issued:

>>> session.query(ScraperSearch).all()

# Assuming that our search has index 1
>>> search = session.query(ScraperSearch).get(1)

# Now list all serp pages
>>> search.serps

# And to iterate through all links use
>>> for serp in search.serps:
>>>     for link in serp.links:
>>>         print(link)

# Alternatively you can get all links like this:
>>> links = [link for serp in search.serps for link in serp.links]

Now that we have a basic understanding of how we can work with the data, let's try to answer the questions that we asked ourselves at the beginning of this blog post!

We could of course keep working in the SQLAlchemy shell that GoogleScraper provides, but it's often more convenient to use a database administration tool (like phpMyAdmin for MySQL). I will use the command line tool sqlite3, since I like working in the shell and I am quite used to it. Let's get comfortable with our tool by firing up some basic queries:

# open the database in the tool
sqlite3 google_scraper.db

# show the schema of the database
sqlite> .schema
CREATE TABLE scraper_search (
    id INTEGER NOT NULL, 
    number_search_engines_used INTEGER, 
    used_search_engines VARCHAR, 
    number_proxies_used INTEGER, 
    number_search_queries INTEGER, 
    started_searching DATETIME, 
    stopped_searching DATETIME, 
    PRIMARY KEY (id)
);
CREATE TABLE serp (
    id INTEGER NOT NULL, 
    search_engine_name VARCHAR, 
    scrapemethod VARCHAR, 
    page_number INTEGER, 
    requested_at DATETIME, 
    requested_by VARCHAR, 
    num_results INTEGER, 
    "query" VARCHAR, 
    num_results_for_keyword VARCHAR, 
    PRIMARY KEY (id)
);
CREATE TABLE scraper_searches_serps (
    scraper_search_id INTEGER, 
    serp_id INTEGER, 
    FOREIGN KEY(scraper_search_id) REFERENCES scraper_search (id), 
    FOREIGN KEY(serp_id) REFERENCES serp (id)
);
CREATE TABLE link (
    id INTEGER NOT NULL, 
    title VARCHAR, 
    snippet VARCHAR, 
    link VARCHAR, 
    domain VARCHAR, 
    visible_link VARCHAR, 
    rank INTEGER, 
    link_type VARCHAR, 
    serp_id INTEGER, 
    PRIMARY KEY (id), 
    FOREIGN KEY(serp_id) REFERENCES serp (id)
);

# how many serp pages do we have using Google as a search engine?
sqlite> select count(*) from serp where search_engine_name = 'google';

# more specifically: how many unique queries do we have with the search engine google? 1000 were expected...
sqlite> select count(distinct query) from serp where search_engine_name = 'google';

# which domains appeared most frequently in all scraped urls (more than fifty times)?
sqlite> select domain, count(domain) as num_domain_appeared from link group by domain having num_domain_appeared > 50 order by num_domain_appeared desc;
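
The same aggregation can also be run from Python with the standard-library sqlite3 module. A minimal sketch, assuming the table and column names from the schema above:

```python
# Sketch: the domain-frequency query via Python's sqlite3 module,
# using the 'link' table from the schema shown above.
import sqlite3

def most_frequent_domains(db_path, min_count=50):
    """Return (domain, count) pairs for domains appearing more than min_count times."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            'select domain, count(domain) as num from link '
            'group by domain having num > ? order by num desc',
            (min_count,)
        ).fetchall()
    finally:
        conn.close()

# for domain, num in most_frequent_domains('google_scraper.db'):
#     print(domain, num)
```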

Now that we are familiar with SQL, let's answer the questions that we asked ourselves earlier:

Which brand earns the most search hits all in all?

This is slightly complicated because we need to parse the brand name out of the initial search query, and we also need to parse the number of results that the search engine delivered for it. That's why we will use a separate script again, so we can store stuff in variables and solve the problem in a programmer's way.

Note that we use some handy tools that Python provides for us:

  • The Counter class from the collections module
  • The pretty print function: pprint

That's the script I made to count the overall search hits for each brand:

# -*- coding: utf-8 -*-

import re
from GoogleScraper.database import get_session, SERP
from collections import Counter
from pprint import pprint

brands = [
    'Levi Strauss',
    'Coach',
    'Phillips-Van Heusen',
    'Estée Lauder',
    'Richemont',
    'Christian Dior',
    'The Gap',
    'Kering',
    'H&M',
    'LVMH',
]

num_res = re.compile(r'(?P<numr>[\d\.,]+) (results|Ergebnisse)')

# your path to the database that GoogleScraper saved
Session = get_session(path='/home/nikolai/Projects/private/GoogleScraper/google_scraper.db')
session = Session()

counter = Counter()

for serp in session.query(SERP).all():
    for brand in brands:
        if brand in serp.query:
            try:
                number ='numr')
                # strip the thousands separators (both ',' and '.')
                counter[brand] += int(number.replace(',', '').replace('.', ''))
            except AttributeError:
                # the result string did not match the regex
                pass

pprint(counter.most_common())

The output is the following:

[('The Gap', 568178074),
 ('Kering', 494447823),
 ('H&M', 371373925),
 ('Coach', 217229804),
 ('Christian Dior', 98413899),
 ('Richemont', 51080484),
 ('Estée Lauder', 51067537),
 ('LVMH', 40017347),
 ('Levi Strauss', 33992170),
 ('Phillips-Van Heusen', 5209493)]

To conclude: obviously, the brand name "The Gap" earns a lot more search hits than for example "Phillips-Van Heusen". This seems plausible, because we can quickly verify it by googling these queries manually:

  • Phillips-Van Heusen: 392.000
  • The Gap: 129.000.000

Which online store is listed most on SERP pages all in all?

Well, to answer this question we would need a list of online stores to match against. Because I don't have this specific info, I cannot answer the question :)

But I am optimistic that someone who actually works in this industry can figure out how to do it by approaching the matter with the techniques outlined here.
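
For anyone who does have such a list, the counting could be sketched as follows. The store domains below are purely hypothetical examples, and the `link` table is the one from the schema shown earlier.

```python
# Sketch: count how often each store from a (hypothetical) list of store
# domains appears among the scraped links. STORE_DOMAINS is made up for
# illustration; swap in a real list of online stores.
import sqlite3
from collections import Counter

STORE_DOMAINS = ['amazon.com', 'zalando.de', 'asos.com']  # hypothetical examples

def count_store_listings(db_path, stores):
    """Tally SERP appearances per store domain from the link table."""
    conn = sqlite3.connect(db_path)
    counts = Counter()
    for (domain,) in conn.execute('select domain from link'):
        for store in stores:
            if domain and store in domain:
                counts[store] += 1
    conn.close()
    return counts

# counts = count_store_listings('google_scraper.db', STORE_DOMAINS)
# print(counts.most_common())
```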

Are the results from Bing and Google fundamentally different?

Well, to answer this question, one can compare the most frequent domains found in the links from Google and Bing. I got the data by using the following SQL queries in the sqlite3 shell:

# Google most frequent domains
sqlite> select link.domain, count(link.domain) as cdom from link join serp on link.serp_id = serp.id where serp.search_engine_name = 'google' group by link.domain having cdom > 30 order by cdom desc;

and now for Bing:

# Bing most frequent domains
sqlite> select link.domain, count(link.domain) as cdom from link join serp on link.serp_id = serp.id where serp.search_engine_name = 'bing' group by link.domain having cdom > 30 order by cdom desc;
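
Rather than eyeballing two domain lists, one could also quantify the difference by comparing the sets of top domains per engine, e.g. with the Jaccard similarity. A minimal sketch, assuming the schema shown earlier:

```python
# Sketch: compare the top domains of two search engines via Jaccard similarity,
# using the 'link' and 'serp' tables from the schema shown earlier.
import sqlite3

def top_domains(conn, engine, limit=50):
    """Return the set of the most frequent domains for one search engine."""
    rows = conn.execute(
        'select link.domain, count(link.domain) as cdom '
        'from link join serp on link.serp_id = serp.id '
        'where serp.search_engine_name = ? '
        'group by link.domain order by cdom desc limit ?',
        (engine, limit))
    return {domain for domain, _ in rows}

def jaccard(a, b):
    """Jaccard similarity of two sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# conn = sqlite3.connect('google_scraper.db')
# print(jaccard(top_domains(conn, 'google'), top_domains(conn, 'bing')))
```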


We have seen that GoogleScraper allows us to process a lot of data in a short time. The more proxies you have, the more you can scrape. The above investigation isn't very interesting in itself, but if you are creative, you may dig up many hidden, valuable pieces of information :)