Ultimate Scrapebox Advantage


Contents
Legal Stuff/Disclaimer
Introduction
Basics
Stuff you should already know
Resources you need before reading this guide
Other YouTube resources that can help you
Scrapebox
The Tool
Anchor Text and the Names Field
Settings Menu
Options Menu
The Add-Ons
An Important Word on Proxies
Footprints
The Merge Feature
Before Blasting
Auto Approved Blogs
Do Follow or No Follow
High PR vs. Low PR
OBL (Outbound Links)
Finding Auto Approved Blogs – Trial and Error Method
Finding Auto Approved Blogs – Steal Your Competitors' Links
Finding Auto Approved Blogs – Finding Spammers' Backlinks
Finding Auto Approved Blogs – Scrape All Pages from a Domain
Trim, Crop, Expand
High PR Blog Commenting
Finding High PR Blogs – The easy method
Finding High PR Blogs – Sign up to post
Finding High PR Blogs – Scraping From Usernames
Finding High PR Blogs – Generic Comment Scraping
Finding Do-Follow Blogs
Finding Do-follow Blogs – Link Checker Method
Finding Do-follow Blogs – Link Checker 2
Finding Do-follow Blogs – Lists/resources/directories
Do-follow Resources
Finding Do-follow Blogs – Comment Plugin
Finding Do-follow Blogs – iFollow
Finding Zero (or Low) OBL Blogs
Method 1 – The Boring Method
Method 2 – No Comment Posted Yet – The Awesome Method
Get your Comments Approved
Targeted Searches = Targeted Comments
Google News
Spinning comments
Using the Scrapebox Learn Feature
Teaching Scrapebox Habari
Finding High PR Forums
Finding Forums that Guarantee Links
Forums for Traffic
Indexing Techniques
RSS
Pinging
Rapid Indexer – 40,000 Links
Links From Directories
Scraping Emails with Scrapebox
Extracting Emails from Classified Ads Sites
Extracting Emails from Forums
Scraping Free PLR Articles
Scraping Images
Avoiding the Box of Sand
Aged Domains
Where to get Aged Domains
New Domains
The Redirect Method
The Web 2.0 Method
Web 2.0 Nets/Webs
A Good Place to Start
How Best to use Scrapebox
A Final Word from Me
Important Links/Resources/Tools
Guides
Tools
Aged Domains
Find top 200 competing pages for a keyword
Proxies
Do-follow blog lists/resources
Do-follow directory/search engine
RSS
Pinging
Forums
Web 2.0 Sites
Footprints Continued
Blogs
Forums
Directories
Ping Mode
.edu/.gov Blogs
.edu/.gov Forums
Email Harvesting
Comment Footprints
General .edu (try these with the .gov TLD as well)

Legal Stuff/Disclaimer
I suppose I have to get this out of the way.
This publication and all its contents are protected by the US Copyright Act of 1976 and all other applicable international, federal, state and local laws, and all rights are reserved, including resale rights. You may not give this product away or sell this guide to anyone else. If you bought or downloaded this publication from anyone other than Josh M (drummer05) on the Backlink Forum, Warrior Forum, Blueprint Forum, or www.thescrapeboxmasteradvantage.com (or its partners), then you have received a pirated copy. Please contact us via email at [email protected] and notify us of the situation.
Also note that most of this publication is based on personal experience and reliable evidence. Although I have made every reasonable attempt to achieve complete accuracy of the content in this guide, I assume no responsibility for errors or omissions. You should use this information as you see fit, and at your own risk. Your particular situation may not be exactly suited to the examples illustrated in this guide; in fact, it's likely that they will not be the same, and you should adjust your use of the information and recommendations accordingly.
Any trademarks, service marks, product names, or named features are assumed to be the property of their respective owners and are only used as a reference. There is no implied endorsement if we use one of these terms.
Finally, think! Use your common sense. Nothing in this guide is meant to replace your common sense or natural train of thought, or medical or other professional advice; it is meant to inform as well as entertain the reader.

Introduction
Dear Reader,
Firstly, thank you so much for purchasing the Ultimate Scrapebox Advantage. You have decided to
put your trust and faith in me and the methods that are in this guide, and you have made the right
decision. I know that the techniques, methods and ideas that are discussed here will enlighten you
and enrich your use of the Scrapebox tool in every aspect imaginable.
In this guide I will give a short introduction to the basics, what I think you should know, the most common add-ons and their uses, and other resources to help you, as well as the merge feature and proxies.
Then I get into the meat of the course and start with auto approved blogs. I talk about stealing your competitors' links in various ways, and give you other ways of building auto approved blog lists too. Then I talk about high PR moderated blogs, and go into some detail with the methods there as well. Next is do-follow blogs: I talk about how to find them and all the information that should come with finding them, and I give some great resources and techniques here which you are going to love.
I talk about getting your comments approved, finding high PR forums, using the Scrapebox Learn feature, and other techniques like scraping images, PLR articles, RSS, indexing, and more. In the appendix section I give all the resources I use, tools, websites, and most importantly, a list of custom footprints for you to use.
It's probably best for you to read this guide in full before putting anything into practice. However, if you would rather skim through the guide and see which techniques you like, or if you find using the material in the appendix section more beneficial, please do what suits you best.
Anyway, it’s time to get the introduction out of the way and start on the good stuff..... So let’s get to
it!
Cheers,
Josh M











Basics
Stuff you should already know
I don't want to waste too much of your time going over the obvious stuff. There is literally a ton of free information on Scrapebox to get you started, and it would be a waste of both our time to include it in this e-book. This guide is meant to give you advanced backlinking methods and advanced Scrapebox uses, putting the two together to help your sites rank faster and smoother.
If you don't already know how to use Scrapebox, then I urge you to follow these links, and watch and learn. The tool can be learned in a few days and mastered in a few weeks, and with this guide you will have the ultimate advantage over anyone who only has these free resources.
Resources you need before reading this guide
Official Forum
http://www.scrapeboxforum.com/index.php
Official User Guide
http://www.scrapebox.com/usage-guide/
The Official YouTube Channel
http://www.youtube.com/user/scrapebox/
Other YouTube resources that can help you
rintintindy
http://www.youtube.com/user/RinTinTindy

scrapeboxblueprint
http://www.youtube.com/user/ScrapeBoxBlueprint

I'm sure there is more out there; in fact I know that there is. If you have any problems or issues with Scrapebox, a Google search will tell you all you need to know. For the purposes of this guide, however, let's get going already!










The Ultimate Scrapebox Advantage
Just before we start, I would like to mention that in any business you will need to invest money. For example, you have just invested in a guide to teach you Scrapebox methods. Within this guide I will tell you where the good places are to invest your money to help you along the way; however, I will always present a free option for you.
You do not need to spend any more money, but if you're serious about Internet Marketing, which I'm sure you are, then you should understand the importance of investing, so don't get upset or offended if I tell you that it's a good idea to go and buy some tool or service.
Don’t forget, there is usually a free version for everything, so don’t worry.....
...Let’s begin!
Scrapebox
Here is Scrapebox broken down for you. It might look complicated if you're seeing it for the first time, but if you think about it in sections it's much easier to deal with. Scrapebox is split into four main areas: the harvester, the URL section, the proxy area and the comment poster.
The Tool




(Screenshot: the main Scrapebox window. Your keywords or footprints go at the top of the harvester, where we will use Custom Footprint; the harvested URLs are listed on the right, with options to trim the list; proxies go at the bottom left; and the info for commenting goes into the comment poster at the bottom right.)

Here is some more basic information on each area to help you understand what this tool can do. Do not expect to be an expert after you read this; it is just meant as a reference and an introduction to the various features.







(Screenshot callouts, area by area:

Harvester: clear footprint; put keyword lists or footprint lists here; scrape keywords; import a list; the Merge feature; select which search engines to scrape from; start harvesting.

URLs: URLs are listed here; remove duplicate URLs/domains; trim all to root domain; check PageRank of URLs; check indexed; scrape emails from URLs; import/export URL list; import/export URLs & PR; perform actions with the list.

Proxies: proxies go here (mine are blotted out); check proxies or harvest proxies.

Comment Poster: select the method of posting (fast is good for auto approved; slow gets a higher success rate but takes longer; manual is great for high PR relevant postings); select either Ping mode, RSS, Trackbacks or Link Checker; information for the comments goes here; start posting.)

TIP: When you select Check Links, the names field, emails field and comments field will be greyed out. All you need to do is put the link you are checking for in the websites area, and the list of the sites you are checking on in the blogs list.
Anchor Text and the Names Field
You can use the name generator to generate names for the names field, but this field is also used as an anchor text field. This is because your name is usually used as the link to your site, so if you want the right anchor text for your link you need to write it in the names field. This is usually a problem, because if you are linking to several domains then you want to spin the anchor text according to those domains, and you don't want to have to keep changing the names field.
So here is the solution: I always just fill the names field with generated names; these can be the same every time, it doesn't matter. Then in the website field, where your links are going to go, you write the text file like this:
• http://www.website1.com {keyword1|keyword2|keyword3}
• http://www.website2.com {keyword4|keyword5}

What this does is spin the keywords in the brackets and use them as anchor text for the preceding website. This means you can link multiple sites with specific anchor texts, and you don't have to input your keywords into the names field every time or keep loads of keyword text files ready for it.
Settings Menu
Adjust Maximum Connections – This is Scrapebox's ability to multitask; it is used for checking multiple PRs at once, making multiple comments at once, etc. I recommend leaving it at the default and then increasing it slowly over time until you get to your perfect setting. If you have a slow computer, you might want to try lowering the default a little.

Adjust Timeout Settings – Scrapebox has a timeout feature which tells it when to stop trying and move on. How long that takes is decided by you in this settings area. If you are on a fast connection, tasks shouldn't take as long; if you are using private proxies they should be faster as well, so the timeout can be shorter too. If you happen to run into a lot of 404 or timeout errors, increase these values until you are at your perfect settings.

Post Only Using Slow Commenter – Use this if you're getting a lot of errors with the fast commenter option; don't use it by default. The slow commenter increases your success rate but decreases the speed of posting.

Use Multi-Threaded Harvester – This speeds up your harvesting rate dramatically. It also eats up more CPU than usual, so this option is good if you have a fast computer.

Adjust Multi-Threaded Harvester Proxy Retries – This is how many times Scrapebox should try to connect using the same proxy after it gets a 404 error. The default is good unless you're seeing a lot of errors, in which case you can lower this setting.

Fast Poster Connection Balancing – This splits your list into 500-URL batches internally; the connections go down to zero momentarily after each 500-URL "burst" before the next 500 are posted to. This gives Windows and the network a short break to process outstanding messages and allows everything to free up. It slows the comment session down slightly, but can provide more stability on some people's systems.


Options Menu
Use Custom User Agents / Edit Custom User Agents – "User agent" is a term that refers to a browser or application that accesses the web somehow. When you are looking at a website, your user agent string is logged by various statistical programs installed on the server, so if you always use the same user agent you could be leaving a footprint. It's best to select Use Custom User Agents, and then in the Edit Custom User Agents section put in various user agents you could use. Go here and copy-paste some: http://www.user-agents.org/index.shtml?moz

Enable Crashdump Logging – This is a great tool and very helpful in the long run. Scrapebox might crash, your computer might crash, or anything might happen where the program has to close, and you don't want to lose your data and the things you have scraped or harvested. You need to install the Scrapebox Crashdump Logger add-on; you can then enable this in the options menu.

Setup Email Notification Server – This is useful if you are running Scrapebox in the background and want to be notified when it finishes a task that takes a long time. It is also good if you have Scrapebox running on other computers and you want to be notified when it's done.

The Add-Ons
The add-ons are a great part of Scrapebox. Before reading this guide, make sure you have downloaded and installed every add-on and read what each one does. There is a short description in the available add-ons window and at http://www.scrapebox.com/addons, so I don't need to explain them here. Some add-ons are used in my techniques, so you will need to know what they are. There are only 23 add-ons and their names are all pretty self-explanatory, but if you don't know exactly what one does, and the short description is not enough, do a search for the specific add-on.






An Important Word on Proxies
Proxies are really important when harvesting URLs or posting with Scrapebox, and the best proxies you can get are private proxies. I would say that getting your own private proxies is an absolute must for any Scrapebox user, especially those using it avidly, and since you have bought an advanced guide, I am guessing you are one of those people.
There are private proxies, shared proxies and free proxies. The free proxies that can be harvested using Scrapebox are great for quick use; if you use free proxies, make sure you scrape new ones before every use of the tool.
Shared proxies are shared between paying users, and private proxies are dedicated to you alone.
Free proxies are great for simple tasks like scraping in small amounts and checking PR, so in that respect I recommend scraping new free proxies every day. Free proxies can also help increase the speed of some of your tasks and minimize your footprint. However, getting private or shared proxies is important too, because you will complete tasks much faster and with better results. I usually get 2 or 3 private proxies, 5-10 shared proxies, and 50-100 free proxies every day that I use Scrapebox. I get my private proxies at http://www.yourprivateproxy.com/413.html (affiliate link), where they have cheap package deals on private or shared proxies.
To find free proxies you can use the Scrapebox proxy harvester, and then test them to filter out the bad ones. Here is a YouTube video on proxy harvesting:
http://www.youtube.com/watch?v=xnDy4bEF7Mw















Footprints
You need a basic idea of what footprints are and how to use them with Scrapebox, as most of this e-book is centred on the use of footprints. A footprint is basically a marking on a website that distinguishes it from other websites. For example, every website with .edu in the URL is going to be an education/university website. If a website contains the keywords "dog houses", that is a footprint telling us the page is related to "dog houses". In fact, any text in the entire HTML code of a website can be tracked as a footprint.
Here is an example. I searched Google with this footprint:
"powered by wordpress" "leave a reply" "dog training"
This brings me all the websites that have those phrases in the HTML code of the page. Here are some cut-outs of one of the resulting websites.




And here are portions of the source code that have the footprints highlighted.
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
<title>Chaar Dog Training goes to the Allentown Pet Expo &#171; Chaar Dog Training</title>

<!-- Comment Form -->
<div id="respond">
<a name="commentform"></a> <!-- named anchor for skip links -->
<h3 class="reply">Leave a Reply</h3>

</a> - All Rights Reserved<br />Powered by <a href="http://wordpress.org/">WordPress</a>
At the end of the book I will provide a much more detailed footprint list that you can use to scrape anything you want.
This is just a simple example of using keywords to scrape various sites, and there are infinite possibilities here. However, if you want to become a Scrapebox master, you need to be aware of some basic Google operators that you can utilize to get the most out of your footprints.
Eventually you are going to want to produce your own footprints, as these will always be the most unique, and you can only do this with a basic knowledge of operators, so here are the operators you can use.


(In the cut-outs: the keyword "dog training" is in the title, which is perfect for our niche; the phrase "leave a reply" is visible, so we know commenting is allowed; and of course it's a WordPress site, so we know Scrapebox can post to it.)
Search Operator – Meaning
allinanchor:keywords – All keywords must appear in the anchor text of links pointing to the page
inanchor:keyword – Only the keyword following the operator must appear in the anchor text of a link pointing to the page
allintext:keywords – All keywords must appear in the text of the page
intext:keyword – Only the keyword following the operator must appear in the text of the page
allintitle:keywords – All query words must appear in the title of the page
intitle:keyword – The term must appear in the title of the page
allinurl:keywords – All query words must appear in the URL
inurl:keyword – The term must appear in the URL of the page
inurl:.xxx – Restricts results to URLs containing the domain suffix ".xxx"
"keyword" – Finds pages that have your "keyword" on the page
-"keyword" – Shows pages that do NOT have that "keyword" on the page
"keyword phrase" – Finds pages with that exact keyword phrase
link:www.site.com – Finds linked pages, i.e. shows pages that point to the URL
site:www.site.com – Searches only one website or domain

These are the standard search engine operators, and I will be using them later when we talk about more complex footprints to use with Scrapebox.
These common operators can also be combined with any other footprints I give you, footprints you find in the appendix, or ones you just pick up. They can be combined, mixed around, and used in many different ways. I will continue from the example I used earlier, which already uses three keyword phrases. Let's see what we can add to that.
I could make the footprint:
• inurl:.com intitle:"dog training" "powered by wordpress" "leave a reply"
That will scrape all the .com pages indexed by Google that have "dog training" in the title, are on the WordPress platform, and give you the option to leave a reply.
I will go through most of the footprints when I teach you the various scraping techniques and methods, so I won't go into too much detail here, but you should always come back to this section to see how you can drill down to find what you want with more targeted operators, and so you get used to using them regularly.
Also, I provide a massive list of footprints for you to use at the end of this e-book in the appendix section, so don't think you are going to have to find all the footprints yourself. Most of the work is already done for you.
The Merge Feature
This feature isn't really spoken about much in the tutorials I have found on the net, so I will explain it here and then give you all the resources you need in order to use the merge feature.
Basically, you have your footprints in a text file, with the Scrapebox operator %KW% somewhere in each one (e.g. "powered by wordpress" %KW%). Then you load keywords into the keyword area of Scrapebox, or scrape keywords, and when you have your keywords, you hit the Merge button. Load the text file with the footprints in it, and each footprint will be merged with all of the keywords.
You can do this with several footprints in one file and several keywords. The more combinations of keywords and footprints, the more results you will get.


Think of the possibilities here: if you have 10 footprints in the text file and 100 keywords, then you can merge 1,000 unique footprints, which will each scrape hundreds if not thousands of results.
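If you ever want to reproduce the merge outside Scrapebox, or just sanity-check a footprint file, it is a simple cartesian product. Here is a minimal Python sketch, assuming one footprint per line (each containing %KW%) and one keyword per line; the file names are just examples:

# Merge every footprint with every keyword, replacing the %KW% marker.
with open("footprints.txt") as f:
    footprints = [line.strip() for line in f if line.strip()]
with open("keywords.txt") as f:
    keywords = [line.strip() for line in f if line.strip()]

merged = [fp.replace("%KW%", kw) for kw in keywords for fp in footprints]

with open("merged_queries.txt", "w") as out:
    out.write("\n".join(merged))

print(len(merged), "queries written")  # 10 footprints x 100 keywords = 1,000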
You will also be able to reuse the footprint text files with future niches/keywords that you would like to scrape for.
One more thing, and this is a special treat for you guys: I have attached to this course text files full of footprints that are all ready for the merge feature... awesome, I know!
So all you need to do now is input your keywords, then merge in one or more of the text files for forums, blogs, ping sites, edu, gov etc., and the correct footprints will merge with your keywords instantly.
Before Blasting
Before you go out and blast your site with thousands of links using my methods, make sure you read through all of them and get a better understanding of the whole process. If you run through everything first and then come back and follow through with the information, you will learn much faster than by trying each method as you read it.
I have also included in the "After Thoughts" section a portion on avoiding the sandbox, and how you can set up your linking structures so that no blast of links is too much. There I discuss backlinking methods like web 2.0, link wheels, redirects, aged domains etc.





(Screenshot: a sample footprint text file; load up your keywords here, or scrape them; then hit Merge and load the text file with the footprint(s) in it.)
Auto Approved Blogs
Here are the methods I use to build auto approved blog lists. These methods can be mixed up, turned inside out, and used however you like.
A couple of things first: when building a big auto approved blog list, having paid proxies is almost always a must. They will make the whole process faster and easier, and you won't find yourself throwing your keyboard through your monitor because of a 404 error flashing on the screen. Also, I want to go over a few things before getting into the details of creating an auto approved blog list: do-follow or no-follow, high PR vs. low PR, and OBL.
Do Follow or No Follow
This is a question I have seen posed in a lot of places; some people say that no-follow is a waste of time and useless for backlinking, and some say that both are important. Firstly, let's get a few things straight. No-follow links will not boost your rankings directly, since Google does not consider them a contributing factor. There are a couple of exceptions worth mentioning: Yahoo Answers and Wikipedia are no-follow, but links from those sites are almost certainly monitored and given value by Google. The thousands of other no-follow sources, however, will not help your rank directly.
Even so, no-follow backlinks are STILL important. Since we are trying to be organic and natural in our link building, we should always be looking to diversify our backlinking, as I'm sure many of you know. Getting no-follow links is just another way of diversifying your backlinks and your footprint, and in fact there isn't a better way to do it. If you think about it, a website that only has do-follow links is a lot more suspicious than one that has a mix of both. So in actual fact, even though no-follow links may not help your rankings directly, because they increase your footprint and diversify your links, they indirectly help your rankings by making your other links stronger.
So no-follow is still important, and I would recommend getting around 15%-25% no-follow links to mix up your footprint and your backlinking when trying to gain authority and rankings for a site.
High PR vs. Low PR
Quality versus quantity: get more PR0s and PR1s, and fewer PR3s, 4s and 5s. The simple reason is that a high PR link contains a lot more "juice" than a low PR link, so you do not need as many of them to get to the same place. But if you get many high PR links and few low PR links, you are not diversifying your footprint evenly enough and your backlinking will not look natural and organic. One high PR link (PR3/PR4) a week is good for a new domain; however, you should only do that much if you are also building consistent lower PR links. Don't forget about no-follow as well.
OBL (Outbound Links)
The amount of outbound links on a page is seen as very important by some people and not such a bother by others. I don't think anyone really knows how much link juice gets diluted by the number of outbound links on a page, and I certainly don't either.
I think the smaller the amount of OBL on a page, the better. That said, if you are deciding which blog to comment on and one has 500 OBL and the other 1500 OBL, it won't really make a difference; a page with fewer than 50 OBL, however, is a noticeable difference worth thinking about. Again, diversify: obviously you want more of the stronger links with low OBL, just as with no-follow and do-follow, but blasting your site with links on pages with very high OBL can still add up to some great link power.
Also consider that any auto approved blog you post to, whether it has 50 OBL or 10 OBL, can still be spammed to death in the future, meaning your comment will be lost in a sea of comments, turning your low-OBL quality link into a rubbish spammed link on a high-OBL page.
Don't forget we are only talking about posting on auto approved blogs for now. Later on we will talk about getting quality blog comment links that will stick for a long time and not get spammed.


Finding Auto Approved Blogs – Trial and Error Method
This is the most common method, but not the best. I will go through it here, and you can decide if it's good for you or not.
You want to start by using a custom footprint targeting WordPress, BlogEngine or Movable Type blogs (a full list of custom footprints is in the appendix section). I find that using a custom footprint brings more results than just ticking the different blog platform options, but you can try both and decide for yourself.
Load a list of keywords in your niche, or scrape some, and then start harvesting.

You have a few options here: you can load up your keywords and merge footprints into them; you can load up your keywords and select a blog platform to scrape from; you can load up your keywords and input a custom footprint into the footprint area; or you can hand-write all the footprints with the keywords in the keyword area (which is what merging does for you).
Either way, you should end up with a few footprints to scrape for. Let's say your niche is "dog training"; here are a few possible footprints:
• "powered by wordpress" "dog training"
• "powered by wordpress" "leave a reply" "dog training"
• "powered by blogengine" "dog training"
• "powered by Movable Type" "dog training"
(Screenshot: insert keywords/footprints here; insert the footprint here; click "Start Harvesting".)
After the harvester has finished, delete the duplicate URLs. You can keep harvesting with new keywords, or keywords that weren't completed, if you would like to keep expanding your list.
Once you are happy with a nice long list of domains with a mix of keywords and platforms, you need to do a test blast to the domains with a fake website, for example www.ls089jsdn-90nsdf.com, or you can use a sandboxed domain and hopefully help it out of the sandbox. Fill the names field with generated names, and the emails field with generated emails. Make some random comments (they don't need to be anything special), and then post using the fast poster.

After the post is complete, export all the successful posts to a text file, posted.txt, and all the failed ones to failed.txt.

Next, load up the failed ones as the blogs to comment on, and run the blast again.
(Screenshot: export posted to file, which we will save for testing; export failed to file, which we will re-post to; remove duplicate domains and URLs; run a blast with a test domain, then export posted and failed, submit to the failed again, and repeat; finally check links: all found links are auto approved!)

Keep repeating the last few steps until you are exporting only failed ones, or very few posted ones. Make sure you save the posted ones as posted1.txt, posted2.txt, posted3.txt etc. With the failed ones, however, you can keep saving over failed.txt, because you won't need them later.
Next, import and replace the list with all the posted links gathered into one .txt file. Transfer it to the blogs list and tick the "check links" checkbox.
All the links that are found are on auto approved blogs. Save this list and you now have an auto approved blog list.
Finding Auto Approved Blogs – Steal Your Competitors' Links
In this method we look to find all the auto approved backlinks that your competitors have and get links on the same blogs.
First, visit http://seoserp.com/google_page_rank/1000_SERP.asp, select your local Google or google.com (global), then enter your domain and your keyword and hit "SERP". Scroll down and hit "see more" to get up to 200 of your competitors.

Copy this list into Notepad++ (you can download it free; check the appendix), select all the text, then hit Menu > TextFX Tools > Delete Line Numbers or First Word, and repeat to remove the colon.
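If you prefer to script that cleanup instead of using Notepad++, here is a small Python sketch. It assumes each copied line looks something like "12: www.competitor.com"; check your actual format and adjust the split if it differs:

# Strip the leading rank number and colon from each SERP line.
with open("serp_list.txt") as f, open("competitors.txt", "w") as out:
    for line in f:
        parts = line.split()
        if parts:
            out.write(parts[-1] + "\n")  # keep only the domain, drop "12:"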

Open the Scrapebox add-on Backlink Checker and load all your URLs. Hit Start, and when it has found all your competitors' backlinks, hit Download All to save the file containing the backlinks.

Open up the Blog Analyzer add-on and load your list, hit Start, and when it's done filter out all the bad blogs and the captcha-protected ones (if you don't have a decaptcha service).
Save the list to the Scrapebox harvester, and do a test blast with a fake domain like we did before (or you can use a sandboxed domain), then follow the steps from the last method to check the links and find the auto approved blogs.
(Screenshots: select all the text, then delete line numbers or first word, and do this again to remove the colon. In the Backlink Checker: load URLs from file, harvester, or blogs list; start gathering backlinks; download all to a file.)
Export the posted and failed ones to separate text files, then load up the failed ones and blast again. Keep doing this until you only have failed ones. Make sure all your successes are saved as separate text files: success1.txt, success2.txt etc.
Gather all the successful posts together and bring them back into Scrapebox to check for links. Use the link checker and check for the fake domain you made up; the ones where it finds the link are all auto approved.
That's the second method for building your auto approved blog list.
To take this a tiny step further, when you are looking for your competitors, try a few different keywords in your niche, so you get different competitors and your auto approved list will be bigger.
Finding Auto Approved Blogs – Finding Spammers' Backlinks
This is another great method for finding auto approved blogs: finding spammers' auto approved blog lists by reverse engineering the blogs they have posted on. There is also a similar method for finding forum spammers, which I mention later on.
First, you need a heavily spammed blog. You can use your own as a honeypot blog and collect up spam websites, or you can use the massive auto approved list that I gave you. Once you have one, you will know it by the hundreds or thousands of comments that have been built there by people just like you, building auto approved lists and posting to them.
What we know is that probably 90% of the spammers who auto posted on this blog were auto posting on other blogs at the same time, so all we have to do is find their backlinks and we can trim them down to the auto approved blogs.
Let's use this spammed blog as an example: http://www.darkfinger.com/post/Librarians-bare-it-all-for-charity-Please-DONT.aspx
The first thing to do is open the page source by right-clicking on the page in your web browser and clicking View Source. Search the source by hitting Ctrl+F on your keyboard, and enter <div class="comment"> in the search bar. This will bring you straight to the comment section, where hundreds or thousands of people have spammed their links. Highlight and copy everything below the line <div class="comment"> and paste it into Notepad++. Then replace "<a href=" with a few spaces, and replace ">" with a few spaces as well. This will isolate the links and get rid of the HTML around them.
Then load the file into a link extractor tool like www.spadixbd.com/elink/ and extract all the links from the text document. This software costs a few bucks; the alternative is to find free software, or to manually go through the file and copy-paste all the spammers' sites, which is what I used to do. Manually selecting about 20 websites can find you several thousand auto approved blogs.
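If you would rather script the extraction than pay for a tool or copy links by hand, a short Python sketch does the same job. The file name is hypothetical, and since this grabs every href below the comment div, expect a few non-spammer URLs mixed in that you should skim out:

import re

# Pull every absolute link out of the copied comment-section HTML.
with open("comment_section.html", encoding="utf-8", errors="ignore") as f:
    html = f.read()

links = re.findall(r'href=["\'](https?://[^"\']+)["\']', html)

# De-duplicate while keeping the original order.
seen = set()
spammer_sites = [u for u in links if not (u in seen or seen.add(u))]

with open("spammer_sites.txt", "w") as out:
    out.write("\n".join(spammer_sites))

print(len(spammer_sites), "unique links extracted")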
Whether you have done it manually or with a tool, you now have a big list of all the spammers' sites in a text document. Load them up into the Scrapebox harvester and use the Backlink Checker add-on like you did for the competition.
Load the spammers' URLs, hit Start, and download all the backlinks to a txt file once it has finished. Import the list into the Blog Analyzer, and then filter out all the bad ones and the captcha ones (if you don't have a decaptcha account).
Then do a test blast to all the good blogs with a fake website or a sandboxed site, export the posted ones, and export the failed ones. Then load up the failed ones and try again; keep repeating this step until you only have failed ones. Save the successes as success1.txt, success2.txt etc. for every time you reload the failed ones; the failed ones you can just save over failed.txt, because you won't need the old ones.
Once you have gone through the failed ones until there are only failed left, or until you can't be bothered anymore, group together all the successes and load them into the link checker so you can check your test links.
Hit Start on the link checker, and all the links that were posted and come back as a success are auto approved blogs.
This method can be expanded by using many spammers' sites on many different blogs, then scraping all the backlinks and checking all of them. You can use the merge function and the special merge-ready files that came with this course, or create a merge file yourself.
Finding Auto Approved Blogs – Scrape All Pages from a Domain
A great way to build your auto approved blog list even bigger is to scrape all the blog pages from a single auto approved domain. Most likely, all the pages on an auto approved blog are also on auto approve, so now you just need to get all of them. This is very easy to do.
If your auto approved blog is http://www.autoapprovedblog.com, you replace http with site:http and then copy "site:http://www............" into the keyword list. Leave the custom footprint area empty while you do this, and hit Harvest. You want to do this with a POST page that you have confirmed is auto approved.
For example, take the auto approved blog http://sheknows.com/blogs/alytude/?p=3383. If you now search Google, or Scrapebox, with the custom footprint
• site:http://sheknows.com/blogs/alytude/?p=3383
a list of all the pages from the site will show up. This footprint will give you around 10,000 auto approved URLs from this one blog. I have included with my guide a list of 5,000 UNIQUE auto approved blogs; if you can scrape 10,000 pages from one blog, you could scrape 50,000,000 URLs from the list I have given you.
I'm not saying it will be that much, but it's a start for you at least... I recommend keeping as many high PR blogs as possible, scraping those blogs to get all the inner pages, and then trimming again based on PR. Even with all that trimming, you will still get a very nice list that you can blast at any time.
Tip: You can do this on a massive list of auto approved blogs: just load it into Notepad++ and replace "http" with "site:http" and they will all get changed; then load that list into the keyword section and follow the same steps. Or use the merge feature with the correct file.


Tip: If you only have the top-level domain of a blog and you want all the pages you can post on, then use the footprint site:http://www.topleveldomain.com "leave a reply".
You must make sure that at least one post page has the footprint "leave a reply" on it, so you know other pages you can comment on will also have those words. It might be "leave a comment", "Post a comment" or a few other possible variations, so this checking step is very important.

Trim, Crop, Expand
Now that you have several methods, you can go and build auto approved lists as large as you want, using these methods repeatedly to make your list bigger and bigger.
You can trim your list by PR, by the amount of outbound links, or by whether or not it's do-follow.
Here is my method summed up: I use all the methods of building a unique auto approved list, and when I have several thousand unique auto approved blogs, I scrape all the pages from the high PR domains. Then I check the PR of those inner pages and trim to what I want.
Now I have a list of just high PR auto approved blogs that I can blast to at any time. It's a good idea to use these methods every week or month, depending on how often you do Scrapebox blasts, and keep building your auto approved blog list.


(Screenshot: load up the good blogs in the format site:http://www.highprblogs.com/postpage..., then start harvesting to get all the inner pages; then check PR and trim to the specs you want.)
High PR Blog Commenting
Now that you have all the methods for auto approved blogs, you can go out and get what I call "quantity links" whenever you want. These are the less valuable links that you build lots of. The "quality links" that you also need to build can be found with Scrapebox too. Those methods are more whitehat, as you are commenting on moderated blogs, and they are great for new sites as well.
To some, Scrapebox seems like a very spammy tool where you can post thousands of links to people's blogs in massive numbers, and in some ways they are right. That is a part of the tool; however, Scrapebox is so much more than that.
The best way to use Scrapebox, I think, is to scrape high PR relevant blogs and comment manually, or auto-comment with well-spun comments (we will discuss that later), getting quality links to your site.
You want to find high PR blogs that have very few OBL, and the blog should be moderated. (No auto approved... that was the last chapter :) )
I will discuss a few methods of finding high PR backlinks, and also commenting so that your comments get approved!
Finding High PR Blogs – The easy method
What we are doing in this method is finding niche-related high PR blogs, PR3 or up, and then manually leaving a comment pointing back to your site, web 2.0 property, 301 redirect domain or any of the other sandbox-safe targets. You don't have to do it the safe way, because this isn't link blasting like with auto approved blogs; however, if you want to be extra safe, then use a sandbox-safe method.
Let's say your niche is the presidents of America, and you want to build relevant links pointing to your Obama page.
Type into the Scrapebox keyword area:
• intitle:"barack obama"
• inurl:"barack obama"
Now you can start harvesting with whatever footprint you want: WordPress, Movable Type or BlogEngine. There is a huge list of custom footprints in the appendix section for you to use, so make sure you have a look.
Save all the harvested URLs into a big list, and remove duplicate URLs and duplicate domains. Then check PageRank, and remove anything below PR3.
If you want, you can check for do-follow with the do-follow add-on, or with a method I discuss very soon in the "Finding Do-follow Blogs" section. You can also check OBL at this point if you want.
Now prepare some relevant comments about Barack Obama to post manually on these blogs.
You are writing a comment like "I like president Obama because blah blah blah, and I like the points you made here about him", and since you're on a page that has Barack Obama in the URL or in the title, you are guaranteed to be very targeted and your comment is much more likely to get approved.
The comments section will explain in more detail how to write comments, and how to spin comments that are relevant and will get accepted. But for now, let's have a look at another method of finding high PR blogs to comment on.
Finding High PR Blogs – Sign up to post
There are some blogs that require you to sign up to post. Some of these are auto approved once you have signed up, so they are very valuable for your backlinking efforts. There are also usually fewer spam comments and high-OBL pages on these blogs, so you are getting great links.
Have a look at this post on a blog you have to sign up for...

So we can use the footprint "login or register to post comments" to find other blogs that also require you to login or register.
Scrape with a footprint like this (you can use other blog platforms too):
"powered by wordpress" "login or register to post comments"
Make sure you check these, because there will be a few that won't be exactly what you're looking for. You can use the Blog Analyzer to check, or do it manually.
Finding High PR Blogs – Scraping From Usernames
This method kind of trails on from the last one and is a great way of getting very nice high PR blog posts, and if you use the tips I give in the commenting section, your posts will get approved very often.
First things first: you have to find a high PR blog where you have to sign up for an account in order to post. You can do this with the previous methods of scraping high PR blogs, or you can use the custom footprints in the appendix to find some great high PR blogs. When you have found a blog, look to see what the usernames of the people commenting are; let's say one is Winspire.
The footprint you will use should be "Submitted by Winspire"; other possible combinations are "Winspire says" or "by Winspire". I'm sure there are other variations, but you will pick them up the more blogs you visit.
Use the above footprints as keywords for harvesting, and choose Custom Footprint but leave the harvester field blank. So your footprints for this technique will be:
• "submitted by username"
• "posted by username"
• "username says"
• "by username"
• ...other possible variations.
(Screenshot: copy the username; the highlighted text is the footprint we can use to find more of these.)
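If you collect a lot of usernames, you can expand them into all of these variations in one go rather than typing them out. A small Python sketch; "Winspire" is just the example username from this section, and the output file name is arbitrary:

# Expand harvested usernames into the comment footprints listed above.
usernames = ["Winspire"]  # add every username you collect

templates = ['"submitted by {u}"', '"posted by {u}"', '"{u} says"', '"by {u}"']

with open("username_footprints.txt", "w") as out:
    for u in usernames:
        for t in templates:
            out.write(t.format(u=u) + "\n")

Load the resulting file straight into the keyword area and harvest.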
Try to scrape as many usernames as possible so you can get more results. You can type all this in manually, or you can use the merge files that came with this e-book: just load up your keywords and merge in the username-method text file.
When you have finished harvesting, you will have a list of websites where these users have signed up with the same username and posted a comment.
Filter out the high PR ones (most of them will usually be high PR), and now you have a big list of high PR blogs that you can sign up to and manually post a relevant comment on, getting a great backlink to your site, web 2.0 page, or redirect.
Since this method is for very high PR links on relevant pages, you don't have to post many; these are what I call "quality links". You don't need many to produce a strong effect.

Finding High PR Blogs – Generic Comment Scraping
This method is good for finding auto approved blogs as well, but we are mainly focusing on high PR ones. This advanced method works on the basis that when people are doing manual or auto commenting, they are using generic comments. First of all, don't use generic comments yourself, because you are giving yourself a footprint.
You can use these people's comments as footprints to find all the blogs that they have posted on, either manually or automatically.
Just find a blog that has generic comments on it. Then put a generic comment in quotes and harvest all the sites that have that comment on them. You can do this by putting a list of the generic comments, in quotes, into the keyword area and then hitting Harvest.
If you can't find any blogs with generic comments, I have a big list of generic comments that you can use in the appendix section.



This technique is great for finding blogs that you have to log in to post on, but which are auto approved and allow links in the body of the comment.



Finding Do-Follow Blogs
Firstly, it’s near impossible to scrape only do-follow blogs, and this is because the do-follow attribute
does not apply to websites and pages; it applies to individual links. The Scrapebox addon checks how
many links are do-follow and how many are no-follow, and then determines whether the page counts
as do-follow depending on the balance. Also, the addon only checks wordpress sites, and it is not even
100% reliable with those.
I am going to provide a couple of ways to check a webpage for the do-follow status without using
the scrapebox addon that work on all blog platforms, and you can determine if these techniques
make life easier for you or not.
A couple more things first. Firstly, about 97% of blogs are NO-follow, so if more than 3% of
your scraped domains turn out to be do-follow, then you are doing very well! Secondly, if
you are scraping spammers’ links, competitors’ links, or comment posters’ links, then you are much
more likely to run into higher PR do-follow blogs, because they have already done the scraping and
narrowing down for you, and it’s likely that they were looking for the do-follow attribute (this applies
more to the high PR blog scraping methods).
Finding Do-follow Blogs – Link Checker Method
We are going to use the scrapebox link checker to see if a site is do-follow or not. The link checker
actually reads the entire HTML code of a website and searches for a specific piece of code that you
have inputted; in this case we are looking for the no-follow attribute.
After you have harvested your list of blogs, or if you have a list already, hit the check links button
and import your url list into the link checker. Then edit the websites file and write the following into
the text file.
• rel="nofollow"
• rel='nofollow'



Now the link checker will check for any no-follow attributes, and you want to keep all the blogs
where scrapebox doesn’t find anything (i.e. the failed ones), as you then know there are no no-follow
attributes on that page.
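To make the logic concrete, here is a minimal Python sketch (assuming the third-party requests library; this is my illustration, not Scrapebox’s code) of the same check: download the raw HTML and look for the two no-follow strings. A page where neither string appears corresponds to a “failed” result, which for this technique is what you want to keep. The URLs are placeholders.

import requests

NOFOLLOW_MARKERS = ('rel="nofollow"', "rel='nofollow'")

def page_has_no_nofollow(url):
    """Return True if neither no-follow string appears in the page source."""
    html = requests.get(url, timeout=15).text.lower()
    return not any(marker in html for marker in NOFOLLOW_MARKERS)

# keep only the pages where no no-follow attribute was found
blogs = ["http://example-blog-one.com/post", "http://example-blog-two.com/post"]
dofollow_candidates = [b for b in blogs if page_has_no_nofollow(b)]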
Finding Do-follow Blogs – Link Checker 2
This is very similar to the previous method, and like the previous method also has its cons. This
method is useful for checking the specific link that you built is do-follow or not, rather than just the
links on the page.
To recap the link checker setup:
• Select the “check links” button.
• In the websites field, input the no-follow tags mentioned above.
• In the blogs list field, input the blogs you are checking for the nofollow tag.
• Now, check links.
Post on a list of harvested blogs with a fake domain, then go back and check links and use this code
in the websites file.
• "url of the fake domain you used" rel="nofollow"
• rel="nofollow" href="url of the fake domain you used"
• 'url of the fake domain you used' rel='nofollow'
• rel='nofollow' href='url of the fake domain you used'
Again, all the failed ones are do-follow. Even though this method isn’t 100%, it’s better than
using the scrapebox add-on, as it’s more accurate and you can check sites other than wordpress. Use
these methods to help find more do-follow blogs and get better links back to your sites, web2.0
properties, or redirects.
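If you want something a little more robust than literal string matching, here is a sketch (assuming the requests and BeautifulSoup libraries, neither of which Scrapebox itself uses) that finds the anchor pointing at your fake domain and inspects its rel attribute directly, so spacing and attribute order don’t matter:

import requests
from bs4 import BeautifulSoup

def my_link_is_dofollow(page_url, fake_domain):
    """Check whether the link you posted carries a rel="nofollow" attribute."""
    soup = BeautifulSoup(requests.get(page_url, timeout=15).text, "html.parser")
    for a in soup.find_all("a", href=True):
        if fake_domain in a["href"]:
            rel = a.get("rel") or []  # BeautifulSoup returns rel as a list
            return "nofollow" not in rel
    return None  # your comment (and link) was not found on the page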
Finding Do-follow Blogs – Lists/resources/directories
Ok, now that I have your attention about do-follow, I want to discuss it properly for a minute. Stop
thinking about scrapebox for the moment, and just listen up. Even with an incredible tool like
scrapebox, finding a do-follow blog to comment on is very challenging; however, we do know the
importance of do-follow, and it will cover about 75% of our link building effort, so it’s worth our time
to find these do-follow blogs.
Besides the obvious backlinking benefit of do-follow links, you have to realise that there is massive
traffic potential as well. If you’re getting links on high PR blogs with relevant comments, and these
blogs are getting views, then your comments are getting views, and that means potential traffic to
your site.
So I am going to discuss finding do-follow blogs without scrapebox, but then using scrapebox to
expand our list to its massive potential, and having all the do-follow blog URL’s we will ever need....
excited yet?
Do-follow Resources
A couple of lists....
• WhyDoWork
• MySEOBlog
• Nicusor
• jimkarter
And a directory or two for do-follow blogs,
• FollowList
• BigFootMarketing
• Blogs that follow
• Dofollow
And a couple of do-follow search engines
• CommentHunt
• inLineSEO
• w3ec
Ok, now there is a lot there, but before you check these out, keep reading. I have more to say about
do-follow, and all these resources will get repeated in the appendix section. First of all, not all these
sites will be do-follow as no one can moderate a list this large perfectly; however most of them will
be do-follow.
If you find a blog you want to comment on, make sure you check that it is do-follow by using the SEO
for Firefox plugin; no-follow links are shown in red. You can also get a plugin called NoDoFollow,
which I use and think is a bit better.
Download NoDoFollow here https://addons.mozilla.org/en-US/firefox/addon/nodofollow/
Ok, so you go to one of the do-follow resources above (or in the appendix) to find do-follow blogs to
comment on, then you check to see if the blog is indeed do-follow by confirming with the
nodofollow plugin.
Then you scrape all the pages on that domain using the custom footprint “site:dofollowblog.com”
(without quotes), then you run the blog analyzer tool to see if you can comment on any of the pages.
You have now built a do-follow blogs list that you can use to get targeted moderated do-follow
backlinks to your site at any time.
To make this method even more powerful, load up a big list of sites that are confirmed do-follow
into the scrapebox keyword area, and scrape all the pages from all the sites... then run blog analyzer
to build your massive list.
Finding Do-follow Blogs – Comment Plugin
This method uses the idea that people use a comment plugin that encourages users to post their
comments, and lots of these are do-follow.
The most common plugins are CommentLuv, KeywordLuv, TopCommentators, RecentComments etc.
The KeywordLuv plugin is my favourite because the links that you are allowed to place in your
post are do-follow, and you can use any anchor text you like.
Here is the footprint to use to find these blogs,
For keywordluv,
• “@YourKeywords” “input your niche here” “This site uses KeywordLuv”
Do not put your keywords where it says “@YourKeywords”; that is just part of the footprint.


For CommentLuv use the footprint,
• “Enable CommentLuv” yourniche
For Top Commentators, here are some optional footprints.
• Top commenter yourniche
• Top commenters yourniche
• Top commentators yourniche
• Top commentors yourniche
Unlike KeywordLuv, you will need to check whether the others are do-follow. A simple double check
with the NoDoFollow plugin is enough, and it’s really quick and easy to do.
Tip: The only footprint I ever use is keywordluv, because it’s do-follow, you can have links in the
post, and there are enough to last me a lifetime. Also always scrape all the pages from a good blog
that you find.

Finding Do-follow Blogs – iFollow
Here is the last method I will mention, and then I will leave do-follow alone, I think I have spoken
about it enough.
We are going to be using google images to find blogs that display an image saying that their blog is
do-follow.
Search google images for any of the strings below to find blogs that have the do-follow attribute;
however, always double check with the NoDoFollow plugin.
• ifollowblue.gif
• ifollowgreen.gif
• ifollowpink.gif
• ifollowpurple.gif
• ifollowltgreen.gif
• ifolloworange.gif
• ifollowwhite.gif
• ifollowmagenta.gif
• ifollow.(gif/png/jpg, take your pick)
• utrackback_ifollow.gif
• ifollow-red.png
• inurl:ucomment
• inurl:ifollow
Whenever you find a domain that is do-follow, scrape all the pages with scrapebox and trim down
with blog analyzer.



Finding Zero (or Low) OBL Blogs
Method 1 – The Boring Method
In order to use this technique, you will need to have already found blogs that are do-follow, or auto
approved. Usually the blogs you have are spammed to death; however you can use this method to
find all the pages on that site that you can post on and usually many of these pages will have zero or
low OBL, and will also be do-follow.
Scrape all the pages of a domain using the site:http://www......... footprint, either with the top level
domain and then using blog analyzer to determine if you can comment, or by using a post URL, and
scraping all the other posts like I have done in previous methods.
You can now use the outbound link checker in scrapebox to remove all the high OBL pages that you
scraped.
This isn’t a foolproof method, but it has helped me a lot in building my lists and finding auto
approved do-follow blogs, and it can help you too.
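For reference, counting OBL is simple enough to script yourself. Here is a rough sketch (assuming requests and BeautifulSoup; the threshold and URL are placeholders) that counts links pointing away from the page’s own domain, which is essentially what the outbound link checker reports:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def outbound_link_count(url):
    """Count anchors whose href points to a different domain."""
    host = urlparse(url).netloc
    soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
    return sum(
        1
        for a in soup.find_all("a", href=True)
        if urlparse(a["href"]).netloc not in ("", host)  # "" means a relative link
    )

# keep only low OBL pages, e.g. fewer than 5 outbound links (pick your own threshold)
pages = ["http://example-blog.com/some-post"]
low_obl = [p for p in pages if outbound_link_count(p) < 5]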
Method 2 – No Comment Posted Yet – The Awesome Method
This method uses a footprint from pages that have no comments posted yet, so we have to find
some text that would be the same on every page with no comments. Here are a few examples I have
found on live sites.
• “no comments posted yet”
• “why not be the first to have your say”
• “there are currently no comments for this post”
You can then create full footprints based around this text. Here are a couple of examples.
Finds wordpress blogs that have no comments
• “powered by wordpress” “no comments posted yet”
Finds all the pages on www.autoapprovedblog.com that have no comments yet.
• site:http://www.autoapprovedblog.com “no comments posted yet”
Also try these footprints to find blogs with only 1 comment,

• "Comments: 1"
• "1 comment so far"
• "Comments(1)"
There are probably way more footprints out there, but I don’t know them all. If you ever come
across blog pages with zero OBL, or one OBL, then check to see if there are any noticeable
footprints, or check across a few pages to see if there are any similarities. If you find any, then scrape
using that footprint and see what you come across.
Get your Comments Approved
First of all you want to avoid writing generic comments. As you have seen before, using generic
comments leaves behind a footprint that not only google can find, but that other scrapebox users can
find and use to harvest all of your comments. So stop doing it!!!
There is an exception however, and this is when you are auto posting to auto approved blogs, this is
when you can use generic comments. To make them as unique as possible you will be spinning loads
of comments and that will minimize your footprint somewhat. I have included some spun comments
in a text file to be used ONLY for auto approved blogs.
Blogs that are moderated, or high pr blogs, or blogs that you have to sign up for will take a bit more
focus and thought when commenting. I am going to go through a few techniques below that will
help you to get your comments approved way more often.
Targeted Searches = Targeted Comments
First of all, you want to be scraping in the right ways using the search engine operators to target
your search. You want to be looking for a popular topic of discussion, and drill down to a specific
search, and then comment using related targeted material. Here is an example,
• intitle:”best 3” “weight loss tips”
Now it’s going to find a bunch of sites related to the best 3 weight loss tips. If you want both terms in
the title then put “intitle:” before both terms. Use this footprint to scrape blogs by selecting a blog
platform, or by adding a blog footprint.
Since you have scraped a targeted list of blogs, with a specific subject and you know what’s in the
title and you know what’s in the blog post, you can be sure to write great comments. Here are some
examples,
“Thanks for those great weight loss tips, I have been trying to lose weight for a while now, but didn’t
know which direction to go. You have really helped thanks again!”
“wow, awesome tips, thanks a lot. I have been having problems for a while now and your post has
put me on the right path. I will be sending this to a few of my friends who I think will like it too!”
Get the idea? These posts get accepted.... A LOT!
There are almost infinite variations that you can think of to get loads of targeted blogs and then post
relevant comments to those blogs.
Here are some more to keep you going for a while,
• intitle:”top ten” “your niche”
• intitle:”top 3” “your niche”
• intitle:”my favourite” “your niche”
• intitle:”my favorite” “your niche”
• intitle:”top books” “your niche”
Like I said, there are probably hundreds if not thousands of ideas and strings that you could come up
with, and that’s for you to figure out in your own niche or with your own ideas. As long as the search
is targeted, and your comment is targeted, then you’re good to go.


Google News
This is a way to help find topics of discussion that you could then go and scrape blogs for, that you
could comment on with a targeted and relevant comment, using... Google News!
You go on to google news and find a recent story that looks like it belongs to a certain niche, let’s use
weight loss for example again. Let’s say someone who is a famous dietician died.
Now you go to scrapebox and harvest blogs using a footprint like intitle:”famous dietician”. Then you
can leave a comment saying “I heard about his/her death, I’m so upset about the loss. My
condolences”.
That was just an example, but you can do this for any news story that you happen to come across.
This is a really great way of creating relevant comments that get accepted: not only is the comment
targeted, it brings real news about the topic, and blog owners love that as it adds real news content
to their pages... and you get a free link from it.
Spinning comments
Always spin your comments for maximum uniqueness. I use The Best Spinner, and it is the best
spinning software by far. I have included a link in the appendix, but I will give you a link here as well.
You get a trial for $7 so you can test it out. This is an affiliate link....
Here is the link: http://www.ultimatescrapeboxadvantage.com/thebestspinner.html
There are video tutorials and plenty of YouTube vids for you to feast on, and I would say that you
can learn the tool in about a day, and be a pro in about a week.
If you are on a tight budget you can get SpinnerChief.com for free
http://www.spinnerchief.com/soft.aspx?id=S16298 (aff)
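For reference, spinning tools commonly work with the spintax format, where {option one|option two|option three} marks the alternatives. Here is a minimal Python sketch of how one spun template expands into a unique comment (an illustration of the format, not any tool’s own code):

import random
import re

GROUP = re.compile(r"\{([^{}]*)\}")

def spin(template):
    """Resolve innermost {a|b|c} groups until none remain."""
    while True:
        m = GROUP.search(template)
        if m is None:
            return template
        choice = random.choice(m.group(1).split("|"))
        template = template[:m.start()] + choice + template[m.end():]

comment = "{Great|Awesome|Really useful} post, {thanks|thanks a lot} for sharing!"
print(spin(comment))  # e.g. "Awesome post, thanks for sharing!"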
If you use both methods above to scrape targeted blogs, then spin relevant targeted comments,
you will see your approval rate shooting up, and you will get loads of awesome backlinks.
Using Scrapebox Learn Feature
This is a great new addition to scrapebox, and it’s not spoken about much. This feature is incredible
and has massive potential. I will give you an example on how to use it and then you will be able to
use it for other things too.
The Learn feature in scrapebox is used to teach the tool how to post to platforms that it hasn’t
posted to before and doesn’t know how. And the way it works is by going to these platforms
manually and teaching scrapebox which field means what, and what to post into each one.
You have to repeat the process with a few blogs until scrapebox can “learn” that platform. So I will
do this with one example, and if you find other sites that you want scrapebox to “learn”, just use the
same techniques to teach it.
Teaching Scrapebox Habari
So as an example of the technique, let’s do this with Habari blogs, as they are quite popular, very
similar to wordpress, and scrapebox doesn’t know how to post to them.
First of all, let’s scrape some habari platform sites using this footprint
• "powered by Habari" "leave a reply" -"wordpress"
You can include a keyword in that footprint as well if you would like, but for the moment, let’s just
scrape some habari blogs.
Once you have a list, you will load up the list into the scrapebox manual poster, and also make sure
you have the emails file, comments file, names file, and websites file ready.

It doesn’t really matter what is in the names, emails, websites, and comments, because we are not
actually posting, we are just training scrapebox to learn the habari platform.
To recap the setup:
• Scrape Habari blogs using the footprint above.
• Select the manual poster.
• Transfer the URL’s to the blogs list for the commenter.
• Make sure the names, emails, websites and comments files are all filled in.
Hit start posting, and the manual posting window will pop up.
Then load up the first site on the list, and click “Learn” and another window will pop up with the first
site showing.

Then click on the name field on the page, and a box will appear which says “define field type”;
select the Name field type.

Do the same for the email field, the website field, and the comment field and the submit button.
Then name the blog platform “habari”.
Then hit done, and move to the next website on the list. You have started to teach scrapebox the
habari platform.
It will take a few blogs until scrapebox has fully learned the platform. Load up the second site on
your list and you might see that one or some of the form areas have been filled in. If they are all
filled in then scrapebox has learned the platform, and will not let you click the learn button. If only
some of the form areas are filled in, you will be able to click the learn button.
Click the learn button again, and follow the exact same steps with the next site. After doing this with
a few sites, scrapebox will have learned the platform, and it will fill out all the fields for you. If you
click the learn button again, scrapebox will show you a screen telling you that you have already
taught it the platform.

This technique is very simple but can be used to teach scrapebox different blog platforms, so if you
ever come across a platform that isn’t recognised, just teach it to scrapebox and then scrape as
many blogs as you can.
















Finding High PR Forums
A great way of getting backlinks is from forum profile links and forum signature links. Below, I am
going to cover what I think is the best way to find forum profile links, and how best to use the forums
you find. Forums are good for links and traffic, so I will discuss both.
We are going to be looking for vBulletin forums for getting links because their profile links are
publicly viewable, and they allow anchor text links in your signature.
Have a look at this screenshot of a vBulletin Forum, notice any footprints?

Put that in the custom footprint area, and then add some keywords. To get more results, use more
keywords. Your basic forum footprint should be,
“Powered by vBulletin” “In order to proceed, you must agree with the following rules”
Then scrape, and check the domain pagerank (not the URL), then filter out all the low PR domains.
You now have a big list of pages where you can just sign up and place your link on a publicly viewable
page, and you can also add your signature with a few posts on some pages.
You can also scrape other types of forums; here are some footprints for you to use. I also mention
many more forum footprints in the footprint section in the appendix.
• "powered by phpbb"
• "powered by punbb"
• “Powered by PHPbb”
• “Powered by SMF”
• “Powered by expression engine”
• “powered by Simple Machines”
Also try register-page strings, such as inurl:register.php or inurl:/index.php?action=register, with all
the forum types above to find registration pages, or look in the footprints area for more ideas.
There are many more footprints in the appendix, and you can find some yourself by going to forums
and seeing if there is a footprint in the code, URL, or title of a site.


Finding Forums that Guarantee Links
Here is a cool method that I have picked up recently. What we are going to do is find noted
spammers that have spammed forums but whose links are still showing. This is similar to the method
for finding spammers’ auto approved blogs, and this way we know that we can also get a link on the
same site and our link will not get deleted. If they are a spammer and their link has been
missed, then there is a very good chance that ours will be missed too.
To find the spammers popular usernames, go to http://stopforumspam.com/ and note down a
bunch of spammers usernames.

Then scrape forums with these footprints in the keyword area.
• “Powered by PHPbb” inurl:spammersusername
• “Powered by vBulletin” inurl:spammersusername
• “Powered by SMF” inurl:spammersusername
• “powered by Simple Machines” inurl:spammersusername
• “powered by punBB” inurl:spammersusername
• “powered by expressionengine” inurl:spammersusername
I provide a Merge file that will automatically merge the footprints with any usernames that you put
into the keyword area of scrapebox.
Then just go to the forums that you have just scraped, sign up, and post your link.
Forums for Traffic
While on the topic of forums, I have to mention how great forums are for traffic. All you have to do
is sign up, place a link in your signature, and start posting. Guys like Terry Kyle recommend just
creating the profile without commenting, just to get a link, but they can be great for traffic too.
Make sure to answer people’s problems and try to help a lot around the forum, and people will start
flying through the links that you provide and your signature link. Don’t forget, people are looking
for solutions in forums all the time, in fact that’s the number one reason for a forum. So take
advantage of this and make some sales.
You could outsource the creation and posting of posts to forums, but it’s a good idea for you to find
them yourself so you know they are targeted.
There are a few ways to drop your link into forums without it being cared about much, or spotted.
You can put it in your signature, you can ask a question about “this website that you went to”, you
can answer a question by saying “go to this site”, and you can solve an issue someone has by
offering your link as a valuable resource.
I’m sure there are more ways, but it’s important to know that forums usually have very responsive
communities, and they are communities of people who need solutions to their problems.
What better way is there for you to make some money?
There is one exception to this rule of forums for traffic, and that is in the internet marketing niche....
and it’s because we all know what you are doing.... there are plenty of other niches of clueless
individuals out there though, just waiting to be sold to.
Indexing Techniques
Indexing is an important part of backlinking, although these techniques that I discuss here are about
forcing indexing, either using rss, pinging or the rapid indexer. The best method is the RSS method,
as it looks the most natural. The pinging and rapid indexer methods are more dangerous, and
considered spammier, so I will mention them and explain them but not go into too much detail.
RSS
Using RSS is a great way to get your backlinks indexed fast. There are several methods to doing this,
and I’m sure you will see other techniques elsewhere, but I will show you what I do, and I think it’s
the best way to do it.
First import your list of links that you have created into the harvester, either on blogs, forums or
elsewhere. You can also load up your website and inner pages if you want them indexed.
Then export as an RSS XML list; make sure you split up the entries so there are 30-40 entries per xml
list.

Then a window should pop up and you can start scanning; what this will do is scrape the title and
description for you. When this is done you can export and save it to a folder that you will remember.
Remember to split up the entries so there are around 30-40 entries per xml.
Then upload it to your server via FTP or your Cpanel. If you don’t know how to do this, look it up
on google. I recommend software called FileZilla; it’s free, easy to use, and I use it myself. Go here to
download FileZilla, http://filezilla-project.org/
Test the feed by looking up the url of the uploaded feed in your internet browser, it should show a
list of the sites that you have posted your links on.
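If you ever want to build the feeds outside Scrapebox, here is a rough Python sketch (standard library only) that splits a list of confirmed links into 30-40 entry chunks and writes each chunk as a bare-bones RSS 2.0 file. The channel link and the per-item titles are placeholders; Scrapebox scrapes the real titles and descriptions for you.

import xml.etree.ElementTree as ET

def write_feeds(links, chunk_size=35, prefix="feed"):
    # split the confirmed links into chunks, one RSS file per chunk
    for n, i in enumerate(range(0, len(links), chunk_size), start=1):
        rss = ET.Element("rss", version="2.0")
        channel = ET.SubElement(rss, "channel")
        ET.SubElement(channel, "title").text = "Links feed %d" % n
        ET.SubElement(channel, "link").text = "http://yourdomain.com/"  # placeholder
        ET.SubElement(channel, "description").text = "Recent links"
        for url in links[i:i + chunk_size]:
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = url  # Scrapebox scrapes real titles here
            ET.SubElement(item, "link").text = url
        ET.ElementTree(rss).write("%s_%d.xml" % (prefix, n),
                                  encoding="utf-8", xml_declaration=True)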
Copy the URL of the xml feed and go back to scrapebox. Click the rss button; in the websites text
file you should put your xml feeds, and in the rss services file you should load up the rss services
that you will find in the rss folder that came with your installation of scrapebox.
Now just start the RSS submission, and make sure you delete any failed services, because they will be
a waste of your time in the future.


Pinging
You can use the scrapebox ping mode to get backlinks to your sites, web2.0 pages, redirects, or
other backlinks to help with indexing. You can generate an almost unlimited number of backlinks
with this technique, so it’s worth mentioning in my e-book; however it’s very spammy, and some
say it does more harm than good.
Many websites use some type of logging or statistics platform, such as AWstats, to track the visitors
that are going to their websites. These visitor logs are generally stored on the site’s server.
Sometimes google indexes them, which results in the referrer sites getting backlinks.
So all you have to do is find indexed sites that can be used with the scrapebox ping mode, then load
up those sites, and the sites that you want pinged, and start pinging.
The footprints to find indexed referrer sites for this technique are noted in the appendix.
If you don’t want to do this yourself, here is a great pinging resource you can use to ping a list of
sites: http://www.pingfarm.com/.
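Conceptually, the ping is just an HTTP request made with your URL set as the referrer, so the stats software logs your site as a referring page. Here is a bare-bones sketch of that idea (assuming the requests library; this is an illustration, not Scrapebox’s actual ping code, and the URLs are placeholders):

import requests

def ping_with_referrer(stats_pages, my_url):
    """Visit each stats/logging page with my_url as the HTTP referrer."""
    for page in stats_pages:
        try:
            requests.get(page, headers={"Referer": my_url}, timeout=15)
        except requests.RequestException:
            pass  # dead pages are simply skipped

ping_with_referrer(["http://example.com/webalizer/usage_200901.html"],  # example
                   "http://yourmoneysite.com/")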






Rapid Indexer – 40,000 Links
Scrapebox rapid indexer is great for indexing backlinks or creating backlinks to your sites, web2.0,
articles, redirects etc. This method works best on redirect domains and aged domains. Don’t forget,
this is a dangerous method, it’s very spammy, and should be used with caution.
I have provided with this guide a list of about 35,000 URL’s that you can use with top level domain
names, (i.e mydomain.com not mydomain.com/page1.html), and a list of about 5,000 domains that
you can use with multiple level domain names (i.e mydomain.com/file1/page2.html).
A couple of things before you use this method. Firstly, this isn’t a good method for building
backlinks; it’s a good method for getting your current backlinks indexed, or for indexing or
backlinking your redirect pages. Secondly, 99.99% of these are no-follow links, so this won’t affect
your SERPs at all. However, if the links you are trying to get indexed are do-follow, then this will help
them get indexed and WILL increase your rankings.
Firstly, open up the scrapebox addon, rapid indexer. Then load up your websites that you want to
get indexed (either your redirects, or links you have posted etc.) where it says load websites. And
load up the list of rapid indexer sites where it says load services.
Then hit start!
Don’t forget, if you are using complex urls, such as www.domain.com/page2/file1/image.gif then
use the smaller 5,000 list.
If you are using top level domains, or subdomains, such as domain.com, or subdomain.domain.com,
then use the 35,000 list.
With these few methods you should have no problems getting your sites or backlinks indexed by
google.
Tip: Split up the list of rapid indexer sites so you’re only doing blasts of a few hundred at a time, and
in general use this technique with caution, and with aged domains or redirects.
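To act on that tip, here is a small Python sketch that splits a big services file into blast-sized files of a few hundred lines each (the file name and chunk size are my own choices):

def split_file(path, chunk_size=300):
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    for i in range(0, len(lines), chunk_size):
        out = "%s.part%d.txt" % (path, i // chunk_size + 1)
        with open(out, "w") as f:
            f.write("\n".join(lines[i:i + chunk_size]))

split_file("rapid_indexer_sites.txt")  # assumed name for your 35,000 list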







Links From Directories
Directories are a great way to get links, and while this isn’t an automated method of building
backlinks, you can scrape as many sites as you want, filter by PR, and then post links on only the
best directory sites.
You can use the list of footprints for directory scraping to scrape thousands of directories. This list is
in the appendix, but I will mention them here as well.
• Powered by: php Link Directory
• powered by PHPLD
• Powered by WSN Links
• powered by PHP Weby
• Powered by cpLinks
• Powered by cpDynaLinks
• powered by BosDirectory
• Powered by Link manager LinkMan
• Powered by Gossamer Links
• Powered by K-Links
• Powered by In-Link
• Powered by eSyndiCat Directory Software
• Powered by: qlWebDS Pro
• Powered by Directory software by LBS
• powered by phpMyDirectory.com
• Powered by HubDir PHP directory script
Then check the PR of the domain (not the page), and you have just compiled a list of high PR
directories where you can submit your link, or even offer a directory submission service.















Non Backlink Methods
Scraping Emails with Scrapebox
Scrapebox can also be used as an email harvester, and even though this method is not completely
ethical, I am including it for the sake of a complete guide on the uses of scrapebox; if you want to
use this method, then on your head be it!
Extracting Emails from Classified Ads Sites
The most common example is to use a classified ads site like craigslist, but you can try this with other
classified ad sites if you like. Use this footprint,
• inurl:craigslist.org “keyword”
• inurl:location.craigslist.org “keyword”
...and then select “grab emails”

Use the footprints mentioned in the appendix to scrape a list of URL’s; when you are done, click
grab emails, and then select “from harvested URL list”. There you go, that wasn’t hard, was it?
Extract Emails from Forums
Since forums are rife with users who have emails, and are usually targeted to a specific niche, this is
a great place to scrape emails. I should mention that this isn’t a foolproof method; you might
expect to scrape about 1% of forum users’ emails, and that’s on a good day. However, if
you’re looking at a forum with 500,000 members, that’s still a lot of targeted emails you can scrape,
and that’s from only one forum.
OK, first locate the forum member list. You can do this by scraping forums in whichever way you
want, and then create a footprint around the forum url, to find all the member pages. Here is the
footprint you need.
• site:www.forum-you-scraped.com inurl:memberlist.php
You can use the merge feature with this technique as well to make it easier for you.
Copy this exact text into a text file and save it as memberlist.txt, don’t change any of this text.
site:%KW% inurl:memberlist.php
Then load up your list of forum top level domains into the keyword area, and merge the file with the
sites. You should now have a list of footprints that will find all the member pages, on all the forums
that you have found.
Start harvesting, and your results will be all the pages of those forums that have a member list on
them.
Then you can use the link extractor addon to extract all the links on those member list pages. This
will get you all the individual profile pages.
Then on those extracted links (the individual profile pages), use the email extraction option to scrape
any exposed emails.
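Putting the three steps together, here is a rough Python sketch (assuming requests and BeautifulSoup; inside Scrapebox the real work is done by the link extractor and the email grabber) of extracting the links from a member list page and then pulling any exposed emails from those pages:

import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def profile_links(memberlist_url):
    """Step 2: extract every link on the member list page."""
    soup = BeautifulSoup(requests.get(memberlist_url, timeout=15).text, "html.parser")
    return [urljoin(memberlist_url, a["href"]) for a in soup.find_all("a", href=True)]

def harvest_emails(urls):
    """Step 3: scrape any emails exposed on the extracted pages."""
    found = set()
    for url in urls:
        try:
            found.update(EMAIL.findall(requests.get(url, timeout=15).text))
        except requests.RequestException:
            continue  # skip pages that fail to load
    return found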
Scraping Free PLR Articles
This method is pretty simple, and makes use of the amazing footprint again. PLR articles are great for
sourcing content for your sites, web2.0 properties, articles, or anything you can imagine. A good idea
is to spin whatever you find using the best spinner, but you don’t have to if you don’t want to.
So here is the footprint, or variations you can use.
• intitle:”Free”+”PLR” yourniche
• inurl:”Free”+”PLR” yourniche
• intitle:”Free”+”PLR”+”download” yourniche
• inurl:”Free”+”PLR”+”download” yourniche
Now I’m sure, with the incredible newfound knowledge of footprints that you have acquired from
this book, you will understand exactly what these footprints mean and how you can manipulate
them, but this is a good start to finding free PLR articles.
Scraping Images
This is an incredible technique and one of my favourites. I use it all the time for everything you could
possibly imagine. You can scrape images for your sites, web2.0 properties, desktop backgrounds,
anything you want really, and here is how you do it.
Copy this list of footprints into a text file, and save as imagesfootprint.txt, somewhere that you will
remember.
inurl:istock -site:istock.com
inurl:shutterstock -site:shutterstock.com
inurl:bigstockphoto -site:bigstockphoto.com
inurl:jupiterimages -site:jupiterimages.com
inurl:dreamstime -site:dreamstime.com
inurl:fotolia -site:fotolia.com
inurl:canstockphoto -site:canstockphoto.com
inurl:inmagine -site:inmagine.com
Then, add the keywords that you want into the keyword section of scrapebox. So if you’re looking
for pictures of Ferraris, put Ferrari, best Ferrari, Ferrari cars, Ferrari sports car, etc. into the keyword
section.
Then hit the Merge button, and load in your list of footprints. Now there should be your footprints,
followed by the keywords that you wrote in.
Then load up the google image grabber addon. Load up your keywords from the scrapebox keyword
list, and then choose how many photos you want from each keyword, and hit locate image url’s.
Then hit select target, and choose where you want your images to be saved, you can also create a
new folder if you wish. When that is done, hit download, and scrapebox will save all the images you
just found into the folder.
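The download step itself is nothing magic. Here is a minimal Python sketch (assuming requests; folder name is my own choice) of what “locate image URLs, then download” boils down to once you have a list of image URLs:

import os
import requests

def download_images(image_urls, folder="scraped_images"):
    os.makedirs(folder, exist_ok=True)
    for i, url in enumerate(image_urls):
        try:
            data = requests.get(url, timeout=15).content
        except requests.RequestException:
            continue  # skip dead links
        # naive extension guess from the URL; fall back to .jpg
        ext = os.path.splitext(url)[1] or ".jpg"
        with open(os.path.join(folder, "image_%04d%s" % (i, ext)), "wb") as f:
            f.write(data)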



You can then browse through them at will. Most of these will be high quality images, and you should
be VERY happy with the results; this is the absolute number one way of finding images online, and I
swear by it.
You will get the odd watermarked image here and there, but you don’t have to use those, just use
the ones you like and delete any others, or save your favourites into another folder and delete the
original folder.







Final Thoughts
Avoiding the Box of Sand
The sandbox is a term coined by internet marketers, and it means your site has disappeared from
the SERPs, usually as a result of blasting a new site with links. I am going to talk about the sandbox
before going into the various blasting techniques because I think it’s very important, not only
to avoid the sandbox, but to minimize your footprint and learn some techniques for good backlink
structures. This is the number one fear when it comes to blasting links using scrapebox, or any
extreme amount of link building, and it’s a perfectly reasonable fear. In my experience this happens
when too many links are built to a domain too fast. It seems that Google penalizes you for
excessive link building, or what appears to be excessive link building, and pushes you to the end
of the queue, so to speak. You can be in the sandbox for days or weeks; however, I have found that
you always come out stronger, so if you have ever fallen prey to Google’s box of sand, don’t
worry: just slow down your link building but keep it consistent, and you will come back stronger.
I will discuss the methods to avoiding the sandbox here, just so you can get the fear out of the way,
and so you know that there are other options available and then we will go into the various
scrapebox techniques.
Aged Domains
If you use aged domains, you can rank faster for keyword terms that might be more competitive.
Also the fact that it is an aged domain means you can blast links to it and never worry about the
sandbox, resulting in faster ranking.
Now, don’t get me wrong: exact match domains, for example www.yourkeyword.com (no hyphens),
will rank extremely fast for keywords that have less than 50,000 competing pages, but with any more
competition than that you should look into getting an aged domain.
If your competition in the top ten results all have aged domains, then it’s much more important to
get an aged domain.
Where to get Aged domains
My number one place to get aged domains is Register Compass; here is a link to the site:
http://www.registercompass.com. There are other free options for finding domains, but I’m sure
your research skills will help you in that respect. Try domainface to start, or search google for “find
aged domain names”. Here is another free site to use:
http://www.aged-domain-finder.com/search.php. Also, look out for Terry Kyle’s forthcoming
http://domainbuyingblackbelt.com/, which can help you look for and buy domains.
If you’re using Register Compass, you want to filter your search to find domains that are older than 4
years and are expiring or on auction at godaddy. Try to get as many domains as you can into your
auction watching list on godaddy, and then when you see a domain that has only a few minutes or
hours left but no one has bid on it, you can grab it for $10. You might want to check the domain on
the Wayback Machine to validate its age. Usually, the more aged the domain is, the more you will
pay, but I have used this method many times to get aged domains for cheap.
When you have your aged domain setup (it takes a week after you win the bid) you can build the site
with the on page SEO that you need, and blast it with links.
No sandbox for you!
New Domains
When working with new domains, you don’t want to blast the domain with too many links because
you might end up in the sandbox, however with the following methods it won’t be a problem for
you.
Firstly, you are going to want to start building links consistently and slowly directing to your new
money site. You can use scrapebox to find .edu/gov links, high pr blogs and forums, and make a few
links every day, but only a few. No blasts of thousands of links from scrapebox... no, you don’t want
to visit the box of sand do you? I discuss later in this guide how you can find high pr forums, blogs,
and .edu/.gov links.
So, with a new domain, make sure you build good links, but few of them, a maximum of 50 per week
straight to the money site, however here are a couple of ways that you CAN use scrapebox to blast
links to a new domain without getting sandboxed, here are those methods.
The Redirect Method
This is an easy and brilliant method to avoid the sandbox. Buy a cheap domain (my favourites are
.info domains, which go for a couple of bucks) and redirect it with a 301 redirect straight to
your money site. This acts as a proxy that stops your site being sent to the sandbox.
Once you have set up your redirect to your money site, you can blast the .info domain with links, and
all the link juice gets redirected to your money site, but your site is protected from the sandbox.
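For reference, on an Apache host the 301 redirect itself is a single line in the cheap domain’s .htaccess file; a minimal sketch, with www.yourmoneysite.com as a placeholder:

# .htaccess on the .info domain: permanently (301) redirect everything to the money site
Redirect 301 / http://www.yourmoneysite.com/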
To expand on this method to the max, you can do what I do. Buy twenty .info domains, redirect all to
your money site, and then start blasting each one, or all of them together. The link building power
this gives a new site is amazing, and you will be astonished by the results.
The Web2.0 Method
This method is easy, and fun because of all the possibilities. You build a web2.0 property (I have a list
of them in the appendix section) and fill it up with some filler content you can find. (See my “finding
free PLR articles” section in this e-book) Then place your link on the web2.0 property, and point it to
your money site using your anchor text.
Then blast the web2.0 property with scrapebox. This method is not as powerful as the redirect
method, however it works great to stop you going to the sandbox, and it creates very powerful
web2.0 links to your money site.
Link Wheels
You can also use the Web2.0 technique with link wheels which is an incredibly powerful technique
when used in this way. Here is a diagram to illustrate what I’m saying.

Link Columns
This is another way of setting up your linking structure. These are also called closed link wheels,
because they are exactly the same as a link wheel, but with one web2.0 to web2.0 link missing.


Web 2.0 Nets/Webs
You can create more complicated structures, with web2.0 levels linking to inner levels, linking back
to your site, or link wheels surrounding spokes of your other link wheels, which then surround a net
of web2.0 which surrounds your money site, then using scrapebox to blast all the web2.0 properties.
The possibilities are endless, but you now have the basic structures for powerful web2.0 linking, and
I’m sure you will go and have fun with it. Many people think complicated is good, that google
somehow won’t detect what you are doing, and that the more complicated the linking plan is, the
better it is. I, however, could not disagree more. I think if google sees a strong and structured linking
plan, they give it more credit than a jumble of links all linking to other links in some sort of
impossible-to-decipher net. The basic link wheels and link columns are what I usually use, and I don’t
find it necessary to do anything more complicated than that.
Once you have setup your link wheels/columns/nets, you can blast all the properties with scrapebox
adding more and more power to your link structure.
So now you know how to avoid the sandbox, and how you can blast links with scrapebox, not
necessarily to your money site (unless it’s aged) but to redirects or web2.0 properties pointing to
your money site.
Don’t forget, I am only talking about blasts of hundreds or thousands of links here. If you find a good
.edu resource, and want to get one or two links, then pointing them straight to your money page is
highly recommended.


A Good Place to Start
Firstly I should say that before learning about using scrapebox you should have some understanding
of SEO, if not, then get a course on SEO. Here is a good starter, its free, http://www.the-blueprint-
guys.com/mmoshort.html.
If you already know a little bit about SEO, then you should be affiliated with clickbank, a cpa
network, adwords, or something like that. Any of these will involve you having a site, blog, article or
web2.0 page with an affiliate link of some sort on it. The next step is SEO, and that is where
scrapebox can help you.
Start using your techniques to build lists of forums you can get your link on, lists of auto approved
blogs that you can blast, lists of high PR blogs that need manual commenting, .edu links, .gov links,
and any other type of links mentioned in this guide.
A new domain can take around 50 links per week and be safe, but if you are using the avoiding the
sandbox methods on a new site, then you can blast a lot more to it.
An aged domain, web2.0 or ezine, can take as many links as you can blast so go ahead.
Then if you have done your keyword research correctly, and kept up the link building for a minimum
of 90 days, then you should see significant improvements in your rankings.
How Best to use Scrapebox
Of all the techniques mentioned in this e-book, I mostly use the following for backlinking. On high PR
blogs that require moderation and have very low OBL, I comment using ultra related comments and
usually get them approved. Next most important is auto commenting on moderated blogs that have
a lower PR, using targeted automated spun comments.
Then I build lists of forums to get profile links on, I can outsource this or do it myself.
And then last but not least, always build your auto approved blogs list, starting by building a list of
unique domains with the various methods, and then scraping every page from each domain and
trimming to the best high PR pages.
Now this is just the way I do it, you can copy me or work out your own system, but the important
thing is to keep building links in a structured but slightly random way.











A Final Word from Me
Well, I hope you enjoyed the Ultimate Scrapebox Advantage, and I hope you have learned a lot. My
aim was not to teach you the techniques so you can go and parrot them like a newbie; I wanted to
teach you the fundamentals behind the techniques so you can go and use this tool to its absolute
maximum potential.
Hopefully from all these techniques and examples you will now know how to scan a site and find a
footprint, how to use that footprint to find other sites, how to get backlinks on blogs, forums, how to
get your comments approved, how to index those pages, and a multitude of other techniques.
What I want is for you to be your own scrapebox expert, so that when you are faced with a problem
you can come up with a solution yourself. After learning this e-book, and truly knowing the content,
I believe that is what we will have achieved.
It has been a pleasure for me to write this e-book, and I have loved every second of it. I hope all the
techniques and tactics used in this guide bring you as much enjoyment and satisfaction as they have
brought me, scrapebox is truly a wonderful tool... and now you know why.
Happy scraping,
Kind Regards,
Josh M

















Appendix Section
Important links/Resources/Tools
Guides
User guide
http://www.scrapebox.com/usage-guide/
Official YouTube channel
http://www.youtube.com/user/scrapebox/
Other YouTube channels
http://www.youtube.com/user/RinTinTindy
http://www.youtube.com/user/ScrapeBoxBlueprint

Tools
Notepad++ http://notepad-plus-plus.org/
Forum profile creator - http://www.mediafire.com/?3p9c00otb9i25c6
Forum profile creator thread - Thanks to – Crazyflx
Aged domains
http://www.aged-domain-finder.com/search.php
http://www.registercompass.com/
http://www.archive.org/web/web.php
Find top 200 competing pages for a keyword
http://seoserp.com/google_page_rank/1000_SERP.asp
Proxies
www.yourprivateproxy.com/413.html
Dofollowblogs lists/resources
http://www.whydowork.com/blog/blogging-tips/558/
http://www.myseoblog.net/2008/06/21/lists-and-directories-of-dofollow-blogs/
http://nicusor.com/do-follow-list/
http://www.jimkarter.com/list-of-dofollow-blogs
Dofollow directory/search engine
http://www.dofollow.us/
http://www.commenthunt.com/
http://www.inlineseo.com/dofollowdiver/
http://w3ec.com/dofollow/
RSS
http://links2rss.com/
http://www.feedlisting.com/submit.php
Pinging
http://www.pingfarm.com/
Forums
Finding Spammers usernames: http://stopforumspam.com/
Web 2.0 Sites
Here is a List of high pr web2.0 sites, Thanks to Jared Alberghini of the warrior forum:
PR9

http://wordpress.com

PR8

http://blogger.com
http://livejournal.com
http://vox.com

PR7

http://blogsome.com
http://bravenet.com ( http://viviti.com )
http://edublogs.org
http://friendster.com
http://knol.google.com
http://home.spaces.live.com (MSN Spaces http://msnspaces.com )
http://squidoo.com
http://tumblr.com
http://weebly.com
http://webs.com

PR6

http://blog.co.uk
http://diaryland.com
http://gather.com
http://hubpages.com (Profile points must be 75 for do-follow links)
http://tblog.com

PR5

http://20six.co.uk
http://bigadda.com
http://blog.ca
http://blogskinny.com
http://blogstream.com
http://blogwebsites.net
http://blurty.com
http://clearblogs.com
http://www.easyjournal.com
http://free-conversant.com
http://freeflux.net
http://opendiary.com
http://sosblog.com
http://tabulas.com
http://terapad.com
http://thoughts.com
http://upsaid.com
http://viviti.com

PR4

http://blogeasy.com
http://bloghi.com
http://bloghorn.com
http://blogigo.com
http://blogono.com
http://blogr.com
http://blogstudio.com
http://blogtext.org
http://bloxster.net
http://freeblogit.com
http://insanejournal.com
http://journalfen.net
http://journalhub.com
http://mynewblog.com
http://netcipia.com
http://shoutpost.com
http://thediary.org
http://wikyblog.com
Footprints Continued
Here is a massive list of footprints that you can use for a multitude of different situations. Please
note that I will never be able to list all the footprints, because there are going to be more and more
discovered every day, I know that I am always finding more, and so will you.
Use these footprints in the custom footprint section, with your keywords below, or use them in the
keyword section with your keywords already in the footprints if you want to use more than one at a
time. Or you can use the merge feature with a list of keywords, which you should know about from
reading the footprints section at the beginning of this e-book.
Don’t forget, that whatever footprint you are using, you can always add the operators that you
learned at the beginning of this e-book. For example you can add a “keyword phrase” to any
footprint, to only find those sites with that keyword phrase.
Also, please note that these are not all my creations, I have built this list of footprints from so many
sites that I can’t even remember, but here they all are packaged together just for you.
Blogs
“powered by wordpress”
“Powered by BlogEngine”
“Powered by Blogsmith”
“powered by Typepad”
“powered by scoop”
“powered by b2evolution”
“powered by ExpressionEngine”

Things to add
"leave a comment"
"leave a reply"
"reply"
“comment”

Forums
“Powered by PHPbb”
“Powered by vBulletin”
“Powered by SMF”
“powered by Simple Machines”
inurl:/index.php?action=register
“powered by punBB”
“powered by expressionengine”
inurl:/member/register/ "powered by expressionengine"
Powered by SMF inurl:"register.php"
Powered by vBulletin inurl:forum
Powered by vBulletin inurl:forums
Powered by vBulletin inurl:/forum
Powered by vBulletin inurl:/forums
Powered by vBulletin inurl:"register.php"

act=post&forum=19
forums/show/
module=posts&action=insert&forum_id
posts/list
/user/profile/
/posts/reply/
new_topic.jbb?
"powered by javabb 0.99"
login.jbb
new_member.jbb
reply.jbb
/cgi-bin/forum/
cgi-bin/forum.cgi
/registermember
listforums?
"forum mesdiscussions.net
version"
index.php?action=vtopic
"powered by forum software minibb"
index.php?action=registernew
member.php?action=register
forumdisplay.php
newthread.php?
newreply.php?
/phorum/
phorum/list.php
"this forum is powered by phorum"
phorum/posting.php
phorum/register.php
phpbb/viewforum.php?
/phpbb/
phpbb/profile.php?mode=register
phpbb/posting.php?mode=newtopic
phpbb/posting.php?mode=reply
/phpbb3/
phpbb3/ucp.php?mode=register
phpbb3/posting.php?mode=post
phpbb3/posting.php?mode=reply
/punbb/
punbb/register.php
"powered by phpbb"
"powered by punbb"
/quicksilver/
"powered by quicksilver forums"
index.php?a=forum
index.php?a=register
index.php?a=post&s=topic
/seoboard/
"powered by seo-board"
seoboard/index.php?a=vforum
index.php?a=vtopic
/index.php?a=register
"powered by smf 1.1.5"
"index.php?action=register"
/index.php?board
"powered by ubb.threads"
ubb=postlist
ubb=newpost&board=1
"ultrabb"
view_forum.php?id
new_topic.php?
login.php?register=1
"powered by vbulletin"
vbulletin/register.php
/forumdisplay.php?f=
newreply.php?do=newreply
newthread.php?do=newthread
"powered by bbpress"
bbpress/topic.php?id
bbpress/register.php
"powered by the unclassified newsboard"
forum.php?req
forum.php?req=register
/unb/
"powered by usebb forum software"
/usebb/
topic.php?id
panel.php?act=register
"a product of lussumo"
comments.php?discussionid=
/viscacha/
forum.php?s=
"powered by viscacha"
/viscacha/register.php
/post?id=
post/printadd?forum
community/index.php
community/forum.php?
community/register.php
"powered by xennobb"
"hosted for free by zetaboards"
"powered by yaf"
yaf_rules.aspx
yaf_topics
postmessage.aspx
register.aspx
post/?type
action=display&thread
index.php
index.php?fid
forums register
register i am over 13 years of age forum
discussion board register
bulletin board register
message board register
phpbb register forum
punbb register forum
forum signup
vbulletin forum signup
SMF register forum
register forum Please Enter Your Date of Birth
forums - Registration Agreement
forum Whilst we attempt to edit or remove any messages containing inappropriate, sexually orientated, abusive, hateful, slanderous
forum By continuing with the sign up process you agree to the above rules and any others that the Administrator specifies.
forum In order to proceed, you must agree with the following rules:
forum register I have read, and agree to abide by the
forum To continue with the registration procedure please tell us when you were born.
forum I am at least 13 years old.
Forum Posted: Tue May 05, 2009 8:24 am Memberlist Profile
View previous topic :: View next topic forums
You cannot post new topics in this forum
proudly powered by bbPress
bb-login.php
bbpress topic.php
Powered by PunBB viewforum.php
Powered by PunBB register.php
The Following User Says Thank You to for this post
BB code is On
Similar Threads All times are GMT +1
If this is your first visit, be sure to check out the FAQ by clicking the link above. You may have to register before you can post
Hot thread with no new posts
Thread is closed
There are 135 users currently browsing forums.
forums post thread
forums new topic
forums view thread
forums new replies
forum post thread
forum new topic
forum view thread
forum new replies
add topic
new topic
phpbb
view topic forum
add message
send message
post new topic
new thread forum
send thread forum
VBulletin forum
Quick Reply Quote message in reply?
Currently Active Users: 232 (0 members and 232 guests)
Currently Active Users: members and guests
Forums Posting Statistics Newest Member
Users active in past 30 minutes: SMF
Users active in past 30 minutes: Most Online Today Most Online Ever
Most Online Today Most Online Ever Forums
Currently Active Users: 18 (0 members and 18 guests)
Users active today: 15478 (158 members and 15320 guests)
Threads: 673, Posts: 7,321, Total Members: 376
Add this forum to your Favorites List! Threads in Forum :
Threads in Forum Hot thread with no new posts
"powered by vbulletin"
"powered by yabb"
"powered by ip.board"
"powered by phpbb"
"powered by phpbb3"
"powered by invision power board"
"powered by e-blah forum software"
"powered by xmb"
"powered by: fudforum"
"powered by fluxbb"
"powered by forum software minibb"
"this forum is powered by phorum"
"powered by punbb"
"powered by quicksilver forums"
"powered by seo-board"
"powered by smf"
"powered by ubb.threads"
"powered by the unclassified newsboard"
"powered by usebb forum software"
"powered by xennobb"
"powered by yaf"
"Powered By MyBB"
"Powered by IP.Board"
powered by phpbb
forums post thread
forums new topic
forums view thread
forums new replies
forum post thread
forum new topic
forum view thread
forum new replies
forum
phorum




add topic
new topic
phpbb
yabb
ipb
posting
add message
send message
post new topic
new thread
send thread
vbulletin
bbs
intext:"powered by vbulletin"
intext:"powered by yabb"
intext:"powered by ip.board"
intext:"powered by phpbb"
inanchor:vbulletin
inanchor:yabb
inanchor:ip.board
inanchor:phpbb
/board
/board/
/foren/
/forum/
/forum/?fnr=
/forums/
/sutra
act=reg
act=sf
act=st
bbs/ezboard.cgi
bbs1/ezboard.cgi
board
board-4you.de
board/ezboard.cgi
boardbook.de
bulletin
cgi-bin/ezboard.cgi
invision
kostenlose-foren.org
kostenloses-forum.com
list.php
lofiversion
modules.php
newbb
newbbs/ezboard.cgi
onlyfree.de/cgi-bin/forum/
phpbbx.de
plusboard.de
post.php
profile.php
showthread.php
siteboard.de
thread
topic
ubb
ultimatebb
unboard.de
webmart.de/f.cfm?id=
xtremeservers.at/board/
yooco.de
forum
phorum
add topic
new topic
phpbb
yabb
ipb
posting
add message
send message
post new topic
new thread
send thread
vbulletin
bbs
cgi-bin/forum/
/cgi-bin/forum/blah.pl
"powered by e-blah forum software"
"powered by xmb"
/forumdisplay.php?
/misc.php?action=
member.php?action=
"powered by: fudforum"
index.php?t=usrinfo
/index.php?t=thread
/index.php?t=
index.php?t=post&frm_id=
"powered by fluxbb"
/profile.php?id=
viewforum.php?id
login.php
register.php
profile.forum?
posting.forum&mode=newtopic
post.forum?mode=reply
"powered by icebb"
index.php?s=
act=login&func=register

Directories
Powered by: php Link Directory
powered by PHPLD
Powered by WSN Links
powered by PHP Weby
Powered by cpLinks
Powered by cpDynaLinks
powered by BosDirectory
Powered by Link manager LinkMan
Powered by Gossamer Links
Powered by K-Links
Powered by In-Link
Powered by eSyndiCat Directory Software
Powered by: qlWebDS Pro
Powered by Directory software by LBS
powered by phpMyDirectory.com
Powered by HubDir PHP directory script

Ping Mode
"Generated by Webalizer Version 2.01"
"Generated by Webalizer Version 2.02"
"Generated by Webalizer Version 2.03"
"Generated by Webalizer Version"
"Created by awstats"
"Advanced Web Statistics 5.5"
"/webalizer/usage"
"/usage/usage"
"/statistik/usage"
"/stats/usage"
"/stats/daily/"
"/stats/monthly/"
"/stats/top"
"/wusage/"
"/logs/awstats.pl"
"/webstats/awstats.pl"
"/awstats.pl"
inurl:/usage_
inurl:/awstats.pl?lang=
inurl:/awstats.pl?config=
inurl:/awstats.pl?output=
usage statistics "Summary Period: february 2009" (put last month here so you know that Google has indexed it; see the sketch after this list)
usage statistics "Summary Period: march 2009"
Generated by Webalizer
inurl:awstats.pl intitle:statistics
Created by awstats
inurl:usage_200811.html
produced by wusage
inurl:twatch/latest.html
inurl:stats/REFERRER.html
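
The "Summary Period" footprints above go stale every month. A minimal Python sketch that rebuilds last month's footprint automatically (the month names are lowercase to match the footprints above; Google ignores case anyway):

from datetime import date

# step back from the first of this month to land on a day in last month
prev = date.fromordinal(date.today().replace(day=1).toordinal() - 1)
months = ["january", "february", "march", "april", "may", "june",
          "july", "august", "september", "october", "november", "december"]
print(f'usage statistics "Summary Period: {months[prev.month - 1]} {prev.year}"')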

.edu/.gov Blogs
inurl:.gov+inurl:blog
site:.edu inurl:wp-login.php +blog
site:.gov inurl:wp-login.php +blog
site:.edu inurl:"wp-admin" +login
site:.edu inurl:blog "post a comment"
site:.edu inurl:blog "post a comment" -"comments closed" -"you must be logged in"
"keyword"
site:.edu "no comments" +blogroll -"posting closed" -"you must be logged in" -"comments are closed"
site:.gov "no comments" +blogroll -"posting closed" -"you must be logged in" -"comments are closed"
inurl:(edu|gov) "no comments" +blogroll -"posting closed" -"you must be logged in" -"comments are closed"
site:.edu inurl:blog "comment" -"you must be logged in" -"posting closed" -"comment closed"
"keyword"
keyword blog site:.edu
keyword +inurl:blog site:.edu

.edu/.gov Forums
edu inurl:login (Create an account)
edu forums sites
gov forums sites
site:.mil
site:edu inurl:login (Create an account)
site:edu "powered by vbulletin"
inurl:.edu/phpbb2
inurl:.edu/ (Powered by Invision Power Board)
site:edu "powered by SMF"
keyword forum site:.edu
keyword forum site:.gov
keyword blog site:.gov
inurl:.gov +inurl:forum + inurl:register
inurl:.gov +inurl:forum
inurl:.edu/phpbb inurl:register
inurl:edu forum
inurl:gov forum
inurl:.edu+inurl:forum

Email Harvesting
inurl:craigslist.org "keyword"
inurl:location.craigslist.org "keyword"
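
Once you have harvested the listing URLs with these footprints, pulling the addresses out of the saved pages is just pattern matching. Scrapebox has its own email grabber; the Python sketch below only illustrates the idea (pages.txt and emails.txt are placeholder file names, and the regex is deliberately simple, not a full RFC-compliant matcher):

import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

with open("pages.txt", encoding="utf-8", errors="ignore") as f:
    text = f.read()

# de-duplicate and sort before writing one address per line
emails = sorted(set(EMAIL_RE.findall(text)))
with open("emails.txt", "w") as out:
    out.write("\n".join(emails))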

Comment Footprints
• I am impressed by the quality of information on this website. There are a lot of good
resources here. I am sure I will visit this place again soon.
• Very useful info. Hope to see more posts soon!
• Great blog post. It’s useful information.
• Hi, I’ve been a lurker around your blog for a few months. I love this article and your entire
site! Looking forward to reading more!
• Useful info. Hope to see more good posts in the future.
• Nice job, it’s a great post. The info is good to know!
• Top post. I look forward to reading more. Cheers
• I really enjoyed this blog. It's an informative topic that helped me solve some problems. Its opportunities are fantastic and its working style so speedy. I think it may help all of you. Thanks.
• “This is very interesting. Thanks for that. We need more sites like this. I commend you on your great content and excellent topic choices.”
• “BestAntivirusSoftware.co.nz is New Zealand’s No. FREE"
• “You really know your stuff... Keep up the good work!”
• This is a really good site post, I'm delighted I came across it. I'll be back down the track to check out other posts.
• Really cool post, highly informative and professionally written. Good job!
• Then more friends can talk about this problem
• You did great work writing and revealing the hidden beneficial features of
• “I had to refresh the page a few times to view it for some reason; however, the information here was worth the wait.”
• This is a really good read for me. I must agree that you are one of the coolest bloggers I have ever seen. Thanks for posting this useful information. This was just what I was looking for. I'll come back to this blog for sure!
• I admire what you have done here. I love the part where you say you are doing this to give
back but I would assume by all the comments that is working for you as well. Do you have
any more info on this?
• Thanks for the informative post. I am sure this post has helped me save many hours of browsing other similar posts just to find what I was looking for. I just want to say: thank you!
• Dude.. I am not much into reading, but somehow I got to read lots of articles on your blog. It's amazing how interesting it is for me to visit you very often.
• This is my first time visiting here. I found so much entertaining stuff on your blog, especially its discussion. From the tons of comments on your posts, I guess I am not the only one having all the enjoyment here! Keep up the excellent work.
• Excellent read, I just passed this onto a colleague who was doing a little research on that. And he actually bought me lunch because I found it for him. So let me rephrase that.
• "It's always good to learn tips like you share for blog posting. As I just started posting comments for blogs and am facing lots of rejections, I think your suggestion will be helpful for me. I will let you know if it works for me too."
• "Thank you for this blog. That's all I can say. You most definitely have made this blog into something that's eye-opening and important. You clearly know so much about the subject, you've covered so many bases. Great stuff from this part of the internet. Again, thank you for this blog."
• Apple iPod World provides free information, reviews on all products related to the Apple
iPod, these include the iPod Classic, Touch, Nano and Shuffle.
• Multicast Wireless is a mission-based, cutting edge, progressive multimedia organization
located in Huntsville, Alabama.
• Nice Website. You should think more about RSS Feeds as a traffic source. They bring me a
nice bit of traffic
• Useful information shared. I am very happy to read this article. Thanks for giving us nice info. Fantastic walk-through. I appreciate this post.
• I agree with your thoughts. Thank you for sharing.
• This is something I have never read before. A very detailed analysis.
• That's great, I never thought about Nostradamus in the OR
• After spending many hours on the internet, at last I've uncovered an individual who definitely knows what they are discussing. Many thanks, a great deal. Wonderful post.
• "Nice post. It's really a very good article. I noticed all your important points. Thanks."
• I think so. I think your article will give those people a good reminder, and they will express their thanks to you later.
• Thanks for the nice blog. It was very useful for me. Keep sharing such ideas in the future as well. This was actually what I was looking for, and I am glad I came here! Thanks for sharing such information with us.
• I wanted to thank you for this great read!! I'm definitely enjoying every little bit of it, and I have you bookmarked to check out the new stuff you post.
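
Scrapebox expects its comments file one comment per line, so the wrapped bullets above need flattening before use. A minimal Python sketch, assuming you paste the bullet list into a file called comments_raw.txt (a placeholder name):

# comments_to_file.py - flatten bulleted, line-wrapped comments into
# one comment per line for Scrapebox's comment field.
comments, current = [], []
with open("comments_raw.txt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line.startswith("•"):           # a new comment starts
            if current:
                comments.append(" ".join(current))
            current = [line.lstrip("• ").strip()]
        elif line and current:             # continuation of a wrapped comment
            current.append(line)
if current:
    comments.append(" ".join(current))

with open("comments.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(comments))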

General .edu (try these with the .gov TLD as well; see the sketch at the end of this list)
site:.edu "forums register"
site:.edu "register iam over 13 years of age forum"
site:.edu "discussion board register"
site:.edu "bulletin board register"
site:.edu "message board register"
site:.edu "phpbb register forum"
site:.edu "punbb register forum"
site:.edu "forum signup"
site:.edu "vbulletin forum signup"
site:.edu "SMF register forum"
site:.edu "register forum Please Enter Your Date of Birth"
site:.edu "forums - Registration Agreement"
site:.edu "forum Whilst we attempt to edit or remove any messages containing inappropriate, sexually orientated, abusive, hateful, slanderous"
site:.edu "forum By continuing with the sign up process you agree to the above rules and any others that the Administrator specifies."
site:.edu "forum In order to proceed, you must agree with the following rules:"
site:.edu "forum register I have read, and agree to abide by the"
site:.edu "forum To continue with the registration procedure please tell us when you were born."
site:.edu "forum I am at least 13 years old."
site:.edu "Forum Posted: Tue May 05, 2009 8:24 am Memberlist Profile"
site:.edu "View previous topic :: View next topic forums"
site:.edu "You cannot post new topics in this forum"
site:.edu "proudly powered by bbPress"
site:.edu "bb-login.php"
site:.edu "bbpress topic.php"
site:.edu "Powered by PunBB viewforum.php"
site:.edu "Powered by PunBB register.php"
site:.edu "The Following User Says Thank You to for this post"
site:.edu "BB code is On"
site:.edu "Similar Threads All times are GMT +1"
site:.edu "If this is your first visit, be sure to check out the FAQ by clicking the link above. You may have to register before you can post"
site:.edu "Hot thread with no new posts"
site:.edu "Thread is closed"
site:.edu "There are 135 users currently browsing forums."
site:.edu "forums post thread"
site:.edu "forums new topic"
site:.edu "forums view thread"
site:.edu "forums new replies"
site:.edu "forum post thread"
site:.edu "forum new topic"
site:.edu "forum view thread"
site:.edu "forum new replies"
site:.edu "add topic"
site:.edu "new topic"
site:.edu "phpbb"
site:.edu "view topic forum"
site:.edu "add message"
site:.edu "send message"
site:.edu "post new topic"
site:.edu "new thread forum"
site:.edu "send thread forum"
site:.edu "VBulletin forum"
site:.edu "Quick Reply Quote message in reply?"
site:.edu "Currently Active Users: 232 (0 members and 232 guests)"
site:.edu "Currently Active Users: members and guests"
site:.edu "Forums Posting Statistics Newest Member"
site:.edu "Users active in past 30 minutes: SMF"
site:.edu "Users active in past 30 minutes: Most Online Today Most Online Ever"
site:.edu "Most Online Today Most Online Ever Forums"
site:.edu "Currently Active Users: 18 (0 members and 18 guests)"
site:.edu "Users active today: 15478 (158 members and 15320 guests)"
site:.edu "Threads: 673, Posts: 7,321, Total Members: 376"
site:.edu "Add this forum to your Favorites List! Threads in Forum :"
site:.edu "Threads in Forum Hot thread with no new posts"
site:.edu "powered by vbulletin"
site:.edu "powered by yabb"
site:.edu "powered by ip.board"
site:.edu "powered by phpbb"
site:.edu "powered by phpbb3"
site:.edu "powered by invision power board"
site:.edu "powered by e-blah forum software"
site:.edu "powered by xmb"
site:.edu "powered by: fudforum"
site:.edu "powered by fluxbb"
site:.edu "powered by forum software minibb"
site:.edu "this forum is powered by phorum"
site:.edu "powered by punbb"
site:.edu "powered by quicksilver forums"
site:.edu "powered by seo-board"
site:.edu "powered by smf"
site:.edu "powered by ubb.threads"
site:.edu "powered by the unclassified newsboard"
site:.edu "powered by usebb forum software"
site:.edu "powered by xennobb"
site:.edu "powered by yaf"
site:.edu "Powered By MyBB"
site:.edu "Powered by IP.Board"
site:.edu "forum"
site:.edu "phorum"
site:.edu "add topic"
site:.edu "new topic"
site:.edu "phpbb"
site:.edu "yabb"
site:.edu "ipb"
site:.edu "posting"
site:.edu "add message"
site:.edu "send message"
site:.edu "post new topic"
site:.edu "new thread"
site:.edu "send thread"
site:.edu "vbulletin"
site:.edu "bbs"
site:.edu "/board"
site:.edu "/board/"
site:.edu "/foren/"
site:.edu "/forum/"
site:.edu "/forum/?fnr="
site:.edu "/forums/"
site:.edu "/sutra"
site:.edu "act=reg"
site:.edu "act=sf"
site:.edu "act=st"
site:.edu "bbs/ezboard.cgi"
site:.edu "bbs1/ezboard.cgi"
site:.edu "board"
site:.edu "board-4you.de"
site:.edu "board/ezboard.cgi"
site:.edu "boardbook.de"
site:.edu "bulletin"
site:.edu "cgi-bin/ezboard.cgi"
site:.edu "invision"
site:.edu "kostenlose-foren.org"
site:.edu "kostenloses-forum.com"
site:.edu "list.php"
site:.edu "lofiversion"
site:.edu "modules.php"
site:.edu "newbb"
site:.edu "newbbs/ezboard.cgi"
site:.edu "onlyfree.de/cgi-bin/forum/"
site:.edu "phpbbx.de"
site:.edu "plusboard.de"
site:.edu "post.php"
site:.edu "profile.php"
site:.edu "showthread.php"
site:.edu "siteboard.de"
site:.edu "thread"
site:.edu "topic"
site:.edu "ubb"
site:.edu "ultimatebb"
site:.edu "unboard.de"
site:.edu "webmart.de/f.cfm?id="
site:.edu "xtremeservers.at/board/"
site:.edu "yooco.de"
site:.edu "cgi-bin/forum/"
site:.edu "/cgi-bin/forum/blah.pl"
site:.edu "powered by e-blah forum software"
site:.edu "powered by xmb"
site:.edu "/forumdisplay.php?"
site:.edu "/misc.php?action="
site:.edu "member.php?action="
site:.edu "powered by: fudforum"
site:.edu "index.php?t=usrinfo"
site:.edu "/index.php?t=thread"
site:.edu "/index.php?t="
site:.edu "index.php?t=post&frm_id="
site:.edu "powered by fluxbb"
site:.edu "/profile.php?id="
site:.edu "viewforum.php?id"
site:.edu "login.php"
site:.edu "register.php"
site:.edu "profile.forum?"
site:.edu "posting.forum&mode=newtopic"
site:.edu "post.forum?mode=reply"
site:.edu "powered by icebb"
site:.edu "index.php?s="
site:.edu "act=login&func=register"
site:.edu "act=post&forum=19"
site:.edu "forums/show/"
site:.edu "module=posts&action=insert&forum_id"
site:.edu "posts/list"
site:.edu "/user/profile/"
site:.edu "/posts/reply/"
site:.edu "new_topic.jbb?"
site:.edu "powered by javabb 0.99"
site:.edu "login.jbb"
site:.edu "new_member.jbb"
site:.edu "reply.jbb"
site:.edu "/cgi-bin/forum/"
site:.edu "cgi-bin/forum.cgi"
site:.edu "/registermember"
site:.edu "listforums?"
site:.edu "forum mesdiscussions.net"
site:.edu "version"
site:.edu "index.php?action=vtopic"
site:.edu "powered by forum software minibb"
site:.edu "index.php?action=registernew"
site:.edu "member.php?action=register"
site:.edu "forumdisplay.php"
site:.edu "newthread.php?"
site:.edu "newreply.php?"
site:.edu "/phorum/"
site:.edu "phorum/list.php"
site:.edu "this forum is powered by phorum"
site:.edu "phorum/posting.php"
site:.edu "phorum/register.php"
site:.edu "phpbb/viewforum.php?"
site:.edu "/phpbb/"
site:.edu "phpbb/profile.php?mode=register"
site:.edu "phpbb/posting.php?mode=newtopic"
site:.edu "phpbb/posting.php?mode=reply"
site:.edu "/phpbb3/"
site:.edu "phpbb3/ucp.php?mode=register"
site:.edu "phpbb3/posting.php?mode=post"
site:.edu "phpbb3/posting.php?mode=reply"
site:.edu "/punbb/"
site:.edu "punbb/register.php"
site:.edu "powered by phpbb"
site:.edu "powered by punbb"
site:.edu "/quicksilver/"
site:.edu "powered by quicksilver forums"
site:.edu "index.php?a=forum"
site:.edu "index.php?a=register"
site:.edu "index.php?a=post&s=topic"
site:.edu "/seoboard/"
site:.edu "powered by seo-board"
site:.edu "seoboard/index.php?a=vforum"
site:.edu "index.php?a=vtopic"
site:.edu "/index.php?a=register"
site:.edu "powered by smf 1.1.5"
site:.edu "index.php?action=register"
site:.edu "/index.php?board"
site:.edu "powered by ubb.threads"
site:.edu "ubb=postlist"
site:.edu "ubb=newpost&board=1"
site:.edu "ultrabb"
site:.edu "view_forum.php?id"
site:.edu "new_topic.php?"
site:.edu "login.php?register=1"
site:.edu "powered by vbulletin"
site:.edu "vbulletin/register.php"
site:.edu "/forumdisplay.php?f="
site:.edu "newreply.php?do=newreply"
site:.edu "newthread.php?do=newthread"
site:.edu "powered by bbpress"
site:.edu "bbpress/topic.php?id"
site:.edu "bbpress/register.php"
site:.edu "powered by the unclassified newsboard"
site:.edu "forum.php?req"
site:.edu "forum.php?req=register"
site:.edu "/unb/"
site:.edu "powered by usebb forum software"
site:.edu "/usebb/"
site:.edu "topic.php?id"
site:.edu "panel.php?act=register"
site:.edu "a product of lussumo"
site:.edu "comments.php?discussionid="
site:.edu "/viscacha/"
site:.edu "forum.php?s="
site:.edu "powered by viscacha"
site:.edu "/viscacha/register.php"
site:.edu "/post?id="
site:.edu "post/printadd?forum"
site:.edu "community/index.php"
site:.edu "community/forum.php?"
site:.edu "community/register.php"
site:.edu "powered by xennobb"
site:.edu "hosted for free by zetaboards"
site:.edu "powered by yaf"
site:.edu "yaf_rules.aspx"
site:.edu "yaf_topics"
site:.edu "postmessage.aspx"
site:.edu "register.aspx"
site:.edu "post/?type"
site:.edu "action=display&thread"
site:.edu "index.php"
site:.edu "index.php?fid"
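
As the heading for this list says, every one of these footprints is worth trying with the .gov TLD as well. Rather than retyping several hundred lines, a minimal Python sketch (edu_footprints.txt and all_footprints.txt are placeholder file names) that clones every site:.edu footprint for .gov:

# edu_to_gov.py - duplicate each site:.edu footprint with site:.gov.
with open("edu_footprints.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

with open("all_footprints.txt", "w") as out:
    for line in lines:
        out.write(line + "\n")
        if "site:.edu" in line:
            out.write(line.replace("site:.edu", "site:.gov") + "\n")

The same one-line replace works for site:.mil if you want to cover military domains too.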