Make Money for Uploading Files - UP TO $10 Per Download
Are you still trying to make money online?

 

The Blackhat Buster

Do you think that making money on the Internet is too difficult? If you think internet marketing is hard, don't worry, it's not your fault. That's mainly because the 'Gurus' have kept you busy reading hundreds of pages of crap ebooks and BS methods and software that have nothing to do with 'making money online'. Those e-books are always fully loaded with affiliate links! No one makes money from those guides and crap methods except the 'Gurus' themselves.

I'm not saying I don't make money on the Internet; I have been doing this for years. But all of my methods are thoroughly tested, and I use each one of them to make money myself before I offer it to my readers.


The problem is that sometimes a marketer, myself included, will develop a form of "tunnel vision", where they can only see in one direction. The secret of making money on the Internet is to look at all options from all directions, and then use those angles to your advantage. I will not publish a method unless it absolutely will make you money, and it is "Unique"!!

 

When I first started in Internet Marketing, I too was trying all the worthless things that the "GURUs" were telling me to do. I think back to when I kept myself busy building unnecessary backlinks, making websites and blogs, and spending thousands of hours on SEO, when there was no need for that kind of stuff at all!!

"PLEASE! Stop Wasting Your Efforts By
Filling your Time with this Crap!"


Think about what you've done so far to make money, or rather, what you've been taught so far.

Did this crap ever work for you?


  • "Write *UNIQUE* articles and submit them to 1000+ article sites?"
  • "Write a Press Release and you'll be at the Top of Google Overnight?" OR
  • "Start doing blog commenting blindly and you'll get 50,000 visitors a month?"


You might have tried all of the above stuff….
……….but did it work for you even for a single time?
NO!!

Because your efforts were in the TOTALLY WRONG DIRECTION!

 

 

"Tell Me Honestly, Would You Like Spending Only
about an Hour A Day And Make $2000 In Just 5 Days?"

Yes, by spending just an hour a day, you can easily make $2000 in 5 days …..

… But it doesn't end here, if you can spend more time, then you can easily double or even triple your income.

You can keep making money with this method, FOREVER!

 

You'll NEVER need ANY special skills with this method. I give you complete training in this Guide and you can do it with Google and your own email account or even a free email account.

I will teach you:


- How and where to look for the money (there are tens of thousands…)
- How to make contact (a perfect template is available)
- How to get the Cash directly into your PayPal or other Money Service account.

You can do this all day long and keep collecting cash with total ease. You can find hundreds of dollars daily: provide people with something they are begging for, take the money, and move on to the next one.

I spend just an hour a day and I made (and am still making…) $2000 in just 5 DAYS!!!

First of all, this is a NEWBIE friendly method and second, you can apply this method, WITHOUT:

- Any website.
- Any SEO skills.
- Any subscribers list.

- Any Experience
- Any Software or script.
- Any Investment
- Any Offline marketing.

"This Is The Easiest Method
Of Making Money, To Date!"

I tried many methods of making money online, and I have made a lot of money with those, but this one was the total reverse of what I had been doing. This is a totally unique and proven method of making money.

  •  You won't have to spend thousands of hours in SEOing your site.
  •   You won't have to spend thousands of hours in making backlinks and searching for traffic.
  •   You won't have to spend thousands of hours in making Clickbank products or distributing free reports just in the hope of making some money.

"You Don't Even Need Your Own Website!"
 
Most methods require that you have your own website, but with this method, you don't even need one.

That's why I call it a Totally Noobie friendly method!

 
"Sorry! Here Comes A Sad Part"
 
Definitely, this method is VERY POWERFUL and I was able to make $2000 in just 5 days (and still making money…),

BUT the sad part is that I am only going to keep this up for a limited time.

Maybe you plan to buy this Guide tomorrow, or even next week, but you might wake up only to find that it is GONE FOREVER!

…. Yes! I am only going to distribute a limited number of copies of this extremely powerful Guide!

 

Check out my Offer and Get It Free here !

Buy me an espresso and get the Blackhat Buster instantly!!

Quick Cash Recipe

Let's cut the crap, shall we?


Making Instant Money Online is Easy…


There, I said it.


You see, people tend to make this process too damn hard. They get caught up in all sorts of stuff: worrying if they are doing something wrong, worrying if they are picking the wrong niche, obsessing over Google and its silly SEO game, and a bunch of other stuff…


All the while, tens of thousands of people are effortlessly making thousands of dollars online each and every month.


So instead of some stupid motivational story about me and my life and how I “fell” into IM and how I was broke and then became wealthy and all that silly  filler, let’s get right to the meat of it…

To make money online, all you need is this simple little recipe….


You Mix a little of this in and some of that, Set the Oven just right, time it right and PRESTO – Success!!

Of course, a Good Cook is also only as good as their tools. So, this little recipe will also give you just the right tools for success.

You have probably heard that before, right?  The problem is that no one really explains it the right way.

Simple is Easy. Simple is Great. Well, I made this SUPER SIMPLE….. If you can take four steps, then you can make money………..

 

Let's ask ourselves a few questions first:

  • Does your Job Suck?
  • Having Trouble even finding a Job?
  • Is your Boss an Idiot?
  • Are you Broke the day after payday?
  • Never have enough money to do the things you want?
  • Like your Job but, just want some extra cash?

Well, if you answered Yes to even one of these questions, then you need to get into the kitchen and Crank out this QUICK CASH RECIPE!!

 

Included:

All the right ingredients you will need for Success at making money.

Some special tools that will get the Cash in your Pocket QUICK !!

SIMPLE – Precise Guidelines to reap the benefits.

SPECIAL Ingredients to take you to the next level.

AND MORE…………..

 

And, I have already gotten a few emails from some of the people who I gave an advance copy to:

 

"Good God, BlackhatBuzz !!   This is so simple.  I don't know how many guides I read that confuse the hell out of me.  I put this into action and made over $400 in less than a week.  Thanks again."

Rob

 

"Hey BHB, I never knew cooking up Quick Cash was so easy and so much fun.  You gave just the right stuff to put into the soup."

Jenny

 

Put away the Mixer, toss the pots and pans, forget about what spices to use. I am giving you a super secret recipe that can make you 1000's of dollars every month.

Are you ready to start Cooking up Some CASH !!

 

Forget the Burger and Fries today for Lunch and Get into the Kitchen.

 

 

….Blackhat Twitter on Autopilot System….

 



Yes…. I am going to Show you

How to Create a Twitter Money Making System

That Runs on Complete AutoPilot

in a Few Easy Steps


 

 

This step-by-step guide shows you How to make Serious Money using Twitter with a System that Runs on Autopilot. You can create your own Fully Automated Money Generating System(s) and make Cash from Laser-Targeted Twitter Traffic on a regular basis in any niche you can think of.

 

 

With this System, you will be able to Create a Blackhat Twitter System that Runs on AutoPilot to make a substantial Income from Google Adsense™, Clickbank™, Amazon™, eBay®, CPA Networks, any other Affiliate program, or all of the above!!! And Guess What? You can also use it to build a Fully Automated List Building System. You only need to spend around 15 to 25 minutes to create a Blackhat Twitter System that Runs on AutoPilot.

 


 

The whole system runs on full Autopilot – you won't have to do anything to run it. This is Literally a Set it and Forget it system where the Money continues to Roll in. The Instruction guide is arranged in a no-nonsense, straight-to-the-point manner with full details and screenshots. While you are reading the Guide, you can open your Browser at the same time and follow each step as outlined.

 

No programming knowledge is necessary. It is helpful to have basic knowledge of Twitter, Gmail, Adsense etc. and to be familiar with uploading files to a website. But the Guide takes you through this anyhow, so it's totally NOOB friendly!!

You need only 4 things to create a Blackhat Twitter System that Runs on AutoPilot.

  •     The Step by Step Guide (Of Course – INCLUDED)
  •     A Firefox Browser connected to the Internet
  •     A special PHP Script that makes the Magic Work (Of Course – INCLUDED)
  •     15 to 25 minutes of your time

 

Once you create one Blackhat Twitter System that Runs on AutoPilot, you can duplicate the whole thing. This way, you can build a network of Fully Automated Twitter Money Machines. Just Rinse and Repeat the steps to make more money. The whole system is managed by the PHP Script provided with this Guide and some other Tools available on the Internet. Of Course I will walk you through it.

How this system works:

There are 2 Main Components in a Blackhat Twitter System that Runs on AutoPilot:

1)     A Self-updating Twitter Account


2)     A Self-updating Website

 

In a Self-updating Twitter account, tweets are posted automatically on a regular basis with links back to your self-updating website.

 


 

Every post in your Twitter account will point to the Blackhat Twitter System that Runs on AutoPilot, so the traffic from all tweets is sent directly to your Money Machine. The Blackhat Twitter System is fully Template driven, so you can easily modify it at any time: change a single template file and hundreds of pages will be updated automatically. For example, if you find a new Affiliate Offer and want to cash in on it, you can simply change the setting and all pages will be updated accordingly.

 


 

Blackhat CPALead Guide

 


Been trying to Make Money with CPA Lead?

Not Making the Income you would Like?

 

Well,  I am going to Give you SIX —-

Yes * 6 * proven methods that will bring in a steady income using CPALead !!

I even give you some bonus tools and Resources to help you on your way.

Yes – I spell it all out – Step-By-Step !!

 

 

Here is a Sample of what you may expect  …….

  • Cash in using FaceBook
  • The Money Rolls in Exploiting YouTube.
  • Using Google to Create Massive income.
  • Dominate the Forums for Quick Cash.
  • AND MORE …..

 

Download the Free Guide Here and Cash in With CPALead…..

 


 

First off I just want to tell you ALL something.

 

SEXY VIDEOS OR COMMENTING ON SEXY VIDEOS WILL GET YOU NOWHERE!!!!!

STOP RIGHT THERE !!

You wouldn't fall for it so why would others? But here's a method YOU might actually fall for…

 

Oh, and did I mention this method won't get your YouTube account banned?

 

The Method:


SIMPLE – You will be commenting on popular music videos and claiming that a sex tape was released of the star. To make the video believable we need to associate the star with a scandal.

 

Here is a REAL EXAMPLE of a comment I used recently about Lady GaGa:

 

LOVE THIS SONG!!!   Poor Lady Gaga….first she is accused of being a man and now even a SEX TAPE of her got released. TMZ says the video can be viewed at:

"myaffiliatesite.com"

 

You will only post on the videos that appear on the first page when you type "Lady Gaga"; the rest aren't worth it. Sort the videos by most viewed so the videos you post on will receive more traffic.


I commented on a full page of videos in 5 minutes. Copy and paste the same comment for that star, keep clicking back, and move on to the next video. The captchas are a pain in the Butt, but they don't slow down the process of commenting as much as you'd think.


Ok, for some reason the accounts I post with NEVER get banned. I think it's because they actually look half legit and I'm switching often between different celebrities.

 

SOOOOO –  How to Monetize This Type of Traffic

 

1. Use a gateway like CPA Content Wizard or BlackhatCodebreaker – or you can use ZangoCash.

2. Sometimes I used to place a Google search bar and make it look like users would have to search for a porn video. Then they would click on the ads that came up for their search term.

3. Start up a blog or forum that has to do with celebrities and turn this traffic into returning visitors.

4. If you can find a reliable person, hire someone for $2-3 per hour and sell the traffic.

DATING OFFERS DO NOT CONVERT. Even the free ones, trust me, I tried.

This is not something that is old and saturated or a method that doesn't work anymore; this is something I used just about 1 hour ago to get traffic. THIS WORKS. On average, with 20 minutes of posting you can get between 300-1000 visitors per day.

I am monetizing this in a different way but that I can't give out – Just Yet, Still in the Development Stage. But Stay Tuned……

 

You know I always share the GOOD STUFF with Ya !

 

 

 

BETTER THAN ANY ARTICLE REWRITER

Are you using PLR articles or auto blogging to make money online with Adsense or CPA networks?

Are you making no Money because zillions of other marketers are using the same content as you on their sites?

Do you want the search engines to treat your articles as unique even after you've used them on article directories?

Need a way to spin articles on your own site?

Now you can have Unique Content articles and PLR content as posts on blogs whose sole purpose is to make adsense earnings or CPA commissions.

 

 

What if we could stop the search engines from seeing duplicate content in our posts… even if we used the same posts on different domains?

 

 

What if your articles were unique on every single view?

 

 

What if the same content looked the same to the human eye but NOT to the search engines?

 

 

What if we could rank for the same content on many domains?

 

 

What if we could rule the SERPS with the same article without resorting to free sites like Squidoo and Hubpages, where we run the risk of being banned and are fighting for only 2 entries in Google?

 

 

What if our content was viewed as unique even if another blogger was using the exact same article as you?

 

 

What if the content on your Autoblogs was unique even compared to the original content?

 

And what if this was achievable without 1 bit of content spinning? No more unreadable articles or blog posts.

 

Well there is a Solution !!

This is a unique WordPress plugin that achieves all of this with a 5 sec install on your blog.

Yep, totally unique content that's readable, in seconds. No rewriting, no copying and pasting, no spinning, no messing with code… just unique blog posts time after time.

 

What are the Benefits of this Plugin……

  • Duplicate Content is NON-EXISTENT
  • Rehash Articles in seconds so that the same content can be used over and over again.
  • Build multiple Blogs with the same Content in record time.
  • Get more traffic to your auto blogs from the SERPS.
  • ZERO Technical know-how is needed.
  • Your content can be 100% unique even if 1000's of bloggers are using the same content.
  • MAKE MORE MONEY from the increase in Traffic from Search Engines.

It's Simple……

NO EFFORT + NO DUPLICATE CONTENT = $$$$$

 

So what is this going to Cost ?

NOTHING – NADA – ZERO

Get it Below FREE………

Click HERE to Download

OR

 

JUST SEND 20 FRIENDS OR FAMILY TO OUR SITE

Below is your UNIQUE BONUS Link

Just send 20 Visitors to our Site Using your UNIQUE BONUS Link and the Download Link will be exposed at 100%

 

OR, If you Need the "Eliminate Duplicate Content Plugin" right Now and you Don't have 20 Friends Yet ……

Click HERE

 

Hey BlackHatBuzzers,

Want to get tons of Clickbank products for free? . . .

Then try this trick:

1) Click the Link below:

 Click here for FREE Clickbank Products

Here is the text from the Link

 

http://www.google.com/search?q=%22CLICKBANK%2FKEYNETICS+will+appear%22&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1

2)  Or Just Paste the text into your web browser’s ADDRESS BOX
(You know . . . the little white box at the top of your
browser window).

3) Hit Enter on your keyboard.

Now, You Can Get Page-After-Page of "FREE" Clickbank products

CBfree Get all the Clickbank Products you want for FREE

Or You can also try this:

Copy the text below and paste it into Google’s SEARCH BOX (and use the quotation marks):

"CLICKBANK/KEYNETICS will appear"

 

Or Here is another good one to try:

 CLICK HERE IF YOU DO NOT WANT TO COPY AND PASTE BELOW

 

"Your credit card or bank statement will show a charge by
ClickBank or CLKBANK*COM."

 


Good Luck Searching

BHBee Get all the Clickbank Products you want for FREE

 

 

Are you a Craigslist Marketer?

Here’s a Little Method to Make Some Extra Cash from Leads!

Read at your own risk !!

CRAIGSLIST BUSTED !!

Now most people who are into CPA use Craigslist in some way, shape or form to collect e-mail leads and then e-mail them an offer. This way you can get a little more personal, avoid looking like spam (which gets your ad flagged quickly), and lure the person in and get their interest piqued before you spring your offer on them. This will show you how to get an extra sale from a large number of your existing leads with very little effort.

Basically the idea is that you are going to take existing leads – no matter what campaign you are running – and get them to buy some anti-virus software from you. This is not a new idea, but it seems like a lot of people missed the boat, and it brings in quite a few extra sales. Anyhow, I will just show you exactly how to do this – you can run this with all your CL e-mailing campaigns.

So, to start with, you will run everything as normal: just send out your delayed auto responses with your sales pitch and make your sales on whatever product/offer you are running. For the sake of making this clear, let's pretend that we are running a campaign to promote a CPA credit offer. We've shot off all of the emails to the leads we collected today; now we're going to harvest and save these e-mails for the next round.

If you use Gmail accounts, there is a free mail extractor from Vallery.net; if you use Yahoo emails, a Google search will probably yield a comparable solution. Use the extractor and save all of the e-mails to your desktop; next, manually log in and delete all of the mails in the account to continue with the credit offer campaign. Note it will limit the number of times you can extract; luckily, with Gmail you can simply add ‘+numbers’ to the end of your e-mail address and it will appear to be a new one (as an example, change BlackhatBuzz@gmail.com to BlackhatBuzz+48@gmail.com, etc.).

Now, we will go and create a new e-mail address; if you can afford a domain, this is better, because you will have more reliability and make some additional sales. If not, simply choose a good-sounding account name and make a new Gmail; a hosted mail is highly recommended, as you are only spending about $5 for a method that will make that back with a great deal of profit on your first conversion.

Now wait about 3 days to send the mail (continue to harvest mails daily and date them on your desktop to be sent out after 3 days); this gives the lead a chance to first complete the original offer that you targeted them for. When it is time to send the mail, simply import the contacts list into your mail client (Gmail will let you import via .CSV if you went the free route as well) and send out a message to all of the users/responders that goes something like this:

 

 

Dear Craigslist user,

Our records indicate that you may have been in contact with a website about a credit (replace with your offer) offer which was e-mailed to you as a follow up via a Craigslist advertisement on the 17th of September (todays date minus 3 days). Various reports indicate that this website is highly malicious in nature and has most probably infected your computer with a virus designed to steal and/or compromise your personal and financial data.

Since this virus is custom coded and very new most anti virus programs will not pick it up, the following three programs contain virus definitions that will detect and remove his malicious virus; it is highly recommended you use one of these programs to scan your computer:

Affiliate Link 1 | Affiliate Link 2 | Affiliate Link 3

 

Sincerely,

Craigslist Anti Virus & Fraud Department

 

Obviously, the stuff in red is not placed within the ad but is simply there for you to see where to change your information. In addition, the three links at the bottom will be to three different anti-virus programs that run through ClickBank. Try to provide three choices instead of one so it looks less promotional – these offers are great because they even pay out recurring commissions in many instances and are usually more than a $30 payout to you.

This method works great; with only 5-10 extra minutes of work it allows you to turn your previous leads into a second sale, often even yielding sales from leads who failed to convert on your initial e-mailing.

 

To your Success……

buzzy No Holds Barred Craigslist Busted Method

 

 

The VERY Black Hat AdSense with Google Trends Method

A couple of notes before we get to the Full Effect:

  • The highlighted parts don’t work properly in WP. If you cannot figure out the template on your own then just head over HERE and get the PDF version of this blog post, which lays out the site’s .php in color-coded format to help you with editing if you are new to web design.
  • Be careful! I recommend not doing more than $20-$50 a day with this method. I taught this method to one of my coaching clients and he did $30K in about 2 weeks; needless to say, he got banned. The key to going dark black hat like this is to fly under the radar: don’t bring attention to yourself and don’t get greedy! Also, if you use your AdSense for a lot of white hat stuff, consider running with AdBrite – they pay less but at least you aren’t risking your AdSense account.
  • I thought the traffic method made perfect sense, but apparently some people are not capable of comprehending it, so let me lay it out. Your Digg.com submission uses the exact trend for its title/description. It then ranks in Google; your page itself does not. If the trend is low competition (no big .coms like Fox/CNN/TMZ) then your Digg.com story will be one of the top results for the trend in a few hours, and people will click it and then click through to your website.

 

It seems that more and more often people are dropping AdSense as a revenue stream due to ever-decreasing ad clicks and the fact that traffic is just worth more through affiliate sales or CPA offers. With every intention of being banned from AdSense and just playing around before dropping them completely, I started a VERY black hat AdSense campaign which, much to my initial surprise, is still going strong 5 months later and getting me paid a ton!

I will not lie; this is very much against the AdSense ToS, so you do risk being banned. All that I can say is that I have not gotten banned and I get paid. If you are a bit faint of heart, or perhaps using your AdSense account for white hat things and do not want to risk it, that is 100% fine; just check out other programs that do not care so much about quality for their advertisers and do NO manual checking.

These networks include:

 

  •   BidVertiser
  •   MSN
  •   AdBrite
  •   YSM

All of these networks pay out slightly lower, on a cost-per-click basis, than Google AdSense, but they do still pay, and with this method you will still earn big! I can also attest to the fact that my friends who use this method have never been banned from AdSense either. As I said though, use it at your own risk and perhaps consider applying to another network so a ban will not hurt your current or future AdSense prospects.

Alright, now that you’ve made up your mind about what pay-per-click program you’re going to use, how about we go ahead and get on to the method…

Set Up

The first step for setup is to start a brand new site. You will need hosting for this; a domain is optional, as you can always use a redirect/shortening service, but you will need your own hosting because you are going to be editing some .php files. You can use 000webhost.com for free hosting, but it does not look clean, so I recommend moving away from it once you get paid. Since Google Trends, and our traffic technique, are both very short-lived, you will be spending about 5-10 minutes each day setting up a new page based around a new trend. It gets simpler after the first time, as you have your template and only have to edit the URL for the new picture as well as put in its dimensions. This can be done on hosted sites only! Blogger.com/BlogSpot.com blogs will NOT work, as you need to edit a .php page to make the code for this work!

Your very first step for the day is to head over to Google.com/trends and look for a suitable Google trend for the day. An ideal trend, for this method, will need to possess the following attributes:

It is not covered by the major media networks such as the Associated Press, CNN or Fox News. These networks will be the only things that will outrank you in Google, which is going to be your traffic source. Generally speaking, about 1/3 of the top 100 trends fit this criterion and you can run with them!

The Google Trend needs to be about some sort of content that people are dying to get. This works mainly with two things – videos and pictures. For this method to work you will need your visitor to click on the picture; you will see why later.

In this example I am a bit limited today because it is the 4th of July, so that takes up a ton of Google Trends that are not too useable to me. However, there are always at least a few Trends out there that I can work with. Today I went with ‘versus cycling’, which holds some great qualities: it is not covered by the big networks, it is related to TV so I can put up a clip, and it even has a related Trend a little further down the list, which means the possibility of adding some extra traffic (this method will let you target up to 2 Google Trends per blog post)! Here is a little shot for y’all:

[Screenshot: the Google Trends list]

Now it is time to get a little content for my new page for the day. I will head over to Google Image search and just type in ‘versus cycling’. I found a suitable image in about a minute; anything that looks like it could be a frame from a video will do just fine. Here is the image that I went with:

[Image: the picture I found for ‘versus cycling’]

Now I want to make it look like a video, and this is pretty darn simple! All we are going to do is stick a play button right on top of it. You don’t even need Photoshop to make this look good; I’m doing it in MS Paint on my laptop, as a matter of fact! I have found the perfect button for you and you can get it at:

http://i869.photobucket.com/albums/ab256/bomaj4/play.jpg

Just take that image and paste it right on top of the picture that you found. My final product, for my new webpage, is this bad boy:

[Image: the final picture with the play button added]

Nothing too fancy, but since you have to do this daily I prefer to spend only about 2 minutes total to slap up this picture! From here you can upload it straight to your server for use in your new web page. You may be thinking that it is pretty black hat to get a person to click on something that isn’t what you make it out to be. Well, you’ve been warned: this is a VERY black hat method, and you haven’t seen anything yet! I am about to show you how to get credited for an AdSense (or other network of your choosing) click every time someone clicks on that video!

Now it is time to generate your AdSense code. The big thing here is that you do NOT want a 100% CTR. Since Google made clicks only count on the text links and not the whole ad, this helps a lot. Although you could easily still get an 80%+ CTR, it WILL get you banned, as it initiates a manual review by Google! Because of this, you want to make your Ad Unit smaller than the picture size. This means that people will have to click the correct area of the picture – your fake video – in order to register an AdSense click. This way only a percentage of people clicking on the image will also be clicking your AdSense ad unit, so you have a reasonable-looking CTR.

So, it is now time to take this picture and hide it over your AdSense ads. In order to do this, simply use the template that is on the following page. Create a new .txt document and paste all of it in there; then I will show you what needs to be changed to put your current image up with your AdSense ads behind it. Also please note that AdSense now uses a new format for ads; I use copied-and-pasted ad units from my old site, so it looks like this. If you use the newer ones, just stick that code where my AdSense code shows up.

Ok then, here it goes for you:

<style>

iframe{

opacity: 0;

border: 0px none transparent;

position: absolute;

top: 0px;

left: 0px;

height: 300px;

width: 250px;

filter:alpha(opacity=0);

}

img{

position: absolute;

top: -50px;

left: 0px;

height: 475px;

width: 342px;

}

</style>

</head>

<body>

<div>

<img src=”http://YOURDOMAINN.com/YOURIMAGE.gif”/>

<script type=”text/javascript”><!–

google_ad_client = “pub-YOURPUBID”;

google_ad_width = 300;

google_ad_height = 250;

google_ad_format = “300×250_as”;

google_ad_type = “image”;

google_ad_channel = “YOURCHANNEL”;

//–></script>

<script type=”text/javascript”

 

src=”http://pagead2.googlesyndication.com/pagead/show_ads.js”>

</script>

<center>Watch versus cycling in this video clip.<br>Versus Cycling is some

great videos to watch!

</div>

</body>

 

As you can see, this is designed for a 300×250 ad unit. If you use a smaller ad unit (a must if your picture is somewhat smaller) then also change the dimensions to the dimensions of your ad unit. The areas highlighted in green reflect the height and width of your picture; be sure to change these to reflect whatever picture you use as well. If you are using a fairly large picture and do not need a smaller ad then simply change the highlighted parts. The first thing to change is to add the URL of the image that you created earlier (in the img src area). Next, just replace the Google AdSense code area to reflect the code that you have generated in your account.

As you know, AdSense displays ads based on keywords, so if there is no text here you will be showing VERY low-paying public service ads. You will also note that I put a <br> in between sentences. Keep it short and keep that after the first sentence, or else it will get moved behind your image and look a bit fishy. If you are using a different advertiser such as AdBrite, then obviously you would place the AdBrite code in place of the AdSense code at the bottom.

This is now the end of your page setup. So to recap: we have identified a Google Trend and built a page that will get all the interested traffic clicking on ads. Now, aside from my very handy promotion trick, all that is left is to create your ‘safe’ page to keep a manual review from slamming you, just in case!

We are going to make a new .txt document in WordPad. This will be extremely generic and safe; just slap in the following code:

<head>

<style>

</style>

</head>

<center><script type="text/javascript"><!--

google_ad_client = "pub-YOURPUBID";

google_ad_width = 300;

google_ad_height = 250;

google_ad_format = "300x250_as";

google_ad_type = "image";

google_ad_channel = "YOURCHANNEL";

//--></script>

<script type="text/javascript"

src="http://pagead2.googlesyndication.com/pagead/show_ads.js">

</script></center>

<br>

<center>Versus Cycling News</center><br>

 

A fire that destroyed Team Type 1’s rider transport van threatened to burn up the team’s lead Sunday morning in the Arizona desert at the Race Across America (RAAM). But quick action by the crew kept the eight-rider team going and Team Type 1 led by more than an hour Sunday afternoon at Time Station No. 7 in Cottonwood, Ariz., 437 miles (704 km) into the race.

 

The fire started beneath the mini-van that hauls the riders and pulls the trailer carrying their Orbea bicycles. It happened when the van’s hot catalytic converter came in contact with tall grass on the side of the road while the vehicle was pulled off to make a rider exchange. Fortunately, no one was hurt. But the van was permanently damaged and a large area of the pavement was scorched.

 

Team Type 1 General Manager Tom Schuler, who is serving as a crew member for the team during RAAM, said he was amazed by how fast riders and staff responded to the situation. “They had to put out the fire, disconnect the trailer hitch, reconnect the trailer to another vehicle, move the bikes around and get a new vehicle,” Schuler said. “We were really fortunate that we were able to disconnect that trailer from the frame of the burned-out vehicle and move it onto a different van.”

 

As a temporary stop-gap, Team Type 1 and Team Type 2 RAAM manager Dave Eldridge put his utility van into action as the rider vehicle until he was able to secure a new rental. Eldridge is the father of Joe Eldridge, who co-founded Team Type 1 with Phil Southerland in 2004.   In the wake of the fire, RAAM organizers put out a message to the more than 70 other crews following teams or individuals in the transcontinental race. It read, in part:

RAAM wish to extend their appreciation for the quick reaction of the crew to mitigate what could have been a very dangerous event. RAAM is unable to control all aspect of the Race and therefore they cannot take responsibility for unfortunate events that may occur during the Race.

 

Please be aware of your surroundings (especially the dry ground cover in California) and the impact you may have on them during the race. Follow the progress of Team Type 1 and Team Type 2 during RAAM by going to teamtype1.org. Both squads are providing inspiration to people affected by diabetes around the world.

Now what you will have to change is simple:

Just Google the Trend you chose and grab a short article. Copy and paste it in there and you have your ‘safe’ page ready to throw up. When you start to get a good burst of ad clicks, simply throw up the safe page after you have earned your goal for the day ($100? $300?) and the ‘safe’ page will replace your old one. This serves a dual purpose: first, it will greatly lower your CTR, because you have the same ad up but no sneaky technique to get it clicked. Second, it gives you a ‘safe’ page, so that by the time an AdSense rep comes to manually check it, if you happened to go a bit overboard, everything will look 100% legit to them!

That’s it for the site creation. Just remember, watch it closely and throw this up once you have a comfortable amount of cash in for the day. Even if you do not hit your goal for the day still throw this up early so that you can lower your CTR and avoid suspicion.

Now all that is left to do is to get some traffic. Simply head over to Digg and submit your new site. In the title box put your first Trend keyword exactly as it appears; in this instance it will be ‘versus cycling’. In the second box put your second Trend exactly as it appears. This is optional, though, as you will often have only one Trend at a time that fits well. Also put a little link bait at the end of the description; after all, you do want people to click through.

As an example my title now reads: Versus Cycling

And my description reads: Versus TV –Watch versus cycling here, great clip of today’s action!

Now just go ahead and submit it. The way this trick works is that Digg.com is LOVED by Google, at least short term. Since you are not competing with CNN and Fox News, your Digg.com story (not the actual site, but the Digg submission) will hit the top of Google in about an hour for your Trend and remain there for 12-24 hours. People searching for this Trend will click on your Digg.com story and then click through the link there, which will lead them to the page that you set up. You now have a TON of traffic ready to click your invisible AdSense ad!

Enjoy it and please do not get greedy and forget to throw your ‘safe’ page up. It really is some great money so be sure to do this and milk it long term instead of getting banned!

 

If you would like the Ebook explaining this Method in Detail

CLICK HERE to Download

 

 

 

 

Are YOU a Resale Rights Junkie?


I'll bet you have a potential goldmine of Private Label, Resale and Master Resale Rights products tucked away on your hard-drive that you've collected from giveaway events, firesales and membership sites.

Would you like to know how you can quickly and easily turn those products into cold hard cash?

 

If You've Never Heard of TradeBit…

Tradebit page Making Money with TradebitTradeBit is a digital product marketplace and the more I work with the site, the more excited I get about the potential and the more ideas I come up with for ways to use it. And, you will, too!

TradeBit has been online for over 4 years and their Alexa traffic rank is 3,998.

According to Alexa's Clickstream data, 4.5% of their visitors go to PayPal when they leave the TradeBit site, which tells me that at least 4.5% of the people who visit the site end up purchasing something (other payment options on TradeBit are Google Checkout, PayDotCom and ClickBank).

Wouldn't you like the chance to capture some of those buyers?

The on-site Search Engine Optimization (SEO) and the authority of the site are apparently really, really great! The first audio book I listed for sale on the TradeBit site landed on the first page of Google for its primary keyword, out of 51.5 million results, in less than 2 days!

And, I made my first sale from TradeBit in less than a week, even tho' I'd done nothing to promote the product other than list it on TradeBit!

TradeBit has two different ways you can sell thru their marketplace. One way is to let them host your product pages, store your download files, handle all the transactions and deliver the downloads for you.

The other way to use TradeBit is a “self-hosted” option where you list products for sale on the site, but the links redirect visitors to your own “self-hosted” salespage.

The "How To Sell Digital Products On TradeBit" course focuses on listing your products directly on TradeBit and letting them handle the transactions and delivery.

TradeBit is not just for Resell Rights and Private Label Rights products, oh no! You can list and sell pretty much any downloadable digital product on the site, as long as you have the rights to sell it.

You'll also get your own subdomain “store” on their site. You don't pay any hosting fees, listing fees or transaction fees for your “store.”

It doesn't cost anything to register for a TradeBit account, list products for sale or host your files on the site. Like I said before, TradeBit handles all the transactions and download delivery.

You don't pay TradeBit anything until you sell something!

When you list your products directly on TradeBit, you'll be charged a 25% commission; TradeBit sends the other 75% of the revenue generated from sales of your products to your PayPal account each Wednesday.

TradeBit is a great way to get your digital products up on the web so they can begin making you money!

 

Get the Complete Course Here……..

 

Click HERE for All PDF's and Instructional Videos

 

THIS IS BRAND NEW – NEVER SEEN BEFORE

Get unlimited $100 adword vouchers from Google!!

adwords voucher2 Get unlimited adword vouchers from Google!!

This is an extremely simple method, and you can use it to get any number of AdWords vouchers you want.

It doesn’t require any investment from you.

You can use these vouchers yourself, or you can sell them if you want.


Get the secret exclusively Here at BlackhatBUZZ

To your Success,

BHB signature2(1) Get unlimited adword vouchers from Google!!

 

CLICK HERE to Download the Secret

 

 

 

 

If you have spent any significant amount of time online, you have likely come across the term  Black Hat at one time or another.

This term is usually associated with many negative comments. This article is here to address those comments and provide some insight into the real life of a Black Hat SEO professional. I’ve been involved in internet marketing for close to 10 years now, the last 7 of which have been dedicated to Black Hat SEO. As we will discuss shortly, you can’t be a great Black Hat without first becoming a great White Hat marketer. With the formalities out of the way, let’s get into the meat of things, shall we?

 

What is Black Hat SEO?

The million dollar question that everyone has an opinion on. What exactly is Black Hat SEO?

seo white black hat(2) Crash Course in BlackHat SEOThe answer here depends largely on who you ask. Ask most White Hats and they immediately quote the Google Webmaster Guidelines like a bunch of lemmings. Have you ever really stopped to think about it though? Google publishes those guidelines because they know as well as you and I that they have no way of detecting or preventing what they preach so loudly. They rely on droves of webmasters to blindly repeat everything they say because they are an internet powerhouse and they have everyone brainwashed into believing anything they tell them. This is actually a good thing though. It means that the vast majority of internet marketers and SEO professionals are completely blind to the vast array of tools at their disposal that not only increase traffic to their sites, but also make us all millions in revenue every year.

The second argument you are likely to hear is the age-old “the search engines will ban your sites if you use Black Hat techniques”. Sure, this is true if you have no understanding of the basic principles or practices. If you jump in with no knowledge you are going to fail. I’ll give you the secret though. Ready? Don’t use Black Hat techniques on your White Hat domains. Not directly at least. You aren’t going to build doorway or cloaked pages on your money site; that would be idiotic. Instead you buy several throwaway domains, build your doorways on those and cloak/redirect the traffic to your money sites. You lose a doorway domain, who cares? Build 10 to replace it. It isn’t rocket science, just common sense. A search engine can’t possibly penalize you for outside influences that are beyond your control. They can’t penalize you for incoming links, nor can they penalize you for sending traffic to your domain from other doorway pages outside of that domain. If they could, I would simply point doorway pages and spam links at my competitors to knock them out of the SERPS. See….. Common sense.

 

So again, what is Black Hat SEO? In my opinion, Black Hat SEO and White Hat SEO are almost no different. White Hat webmasters spend time carefully finding link partners to increase rankings for their keywords; Black Hats do the same thing, but we write automated scripts to do it while we sleep. White Hat SEOs spend months perfecting the on-page SEO of their sites for maximum rankings; Black Hat SEOs use content generators to spit out thousands of generated pages to see which version works best. Are you starting to see a pattern here? You should: Black Hat SEO and White Hat SEO are one and the same, with one key difference. Black Hats are lazy. We like things automated. Have you ever heard the phrase "Work smarter, not harder"? We live by those words. Why spend weeks or months building pages only to have Google slap them down with some obscure penalty?

If you have spent any time on web master forums you have heard that story time and time again. A web master plays by the rules, does nothing outwardly wrong or evil, yet their site is completely gone from the SERPS (Search Engine Results Pages) one morning for no apparent reason. It’s frustrating, we’ve all been there. Months of work gone and nothing to show for it. I got tired of it as I am sure you are. That’s when it came to me. Who elected the search engines the "internet police"? I certainly didn’t, so why play by their rules? In the following pages I’m going to show you why the search engines rules make no sense, and further I’m going to discuss how you can use that information to your advantage.

Search Engine 101

As we discussed earlier, every good Black Hat must be a solid White Hat. So, let’s start with the fundamentals. This section is going to get technical as we discuss how search engines work and delve into ways to exploit those inner workings. Let’s get started, shall we?

Search engines match queries against an index that they create. The index consists of the words in each document, plus pointers to their locations within the documents. This is called an inverted file. A search engine or IR (Information Retrieval) system comprises four essential modules:

A document processor

A query processor

A search and matching function

A ranking capability

While users focus on "search," the search and matching function is only one of the four modules. Each of these four modules may cause the expected or unexpected results that consumers get when they use a search engine.

Document Processor

The document processor prepares, processes, and inputs the documents, pages, or sites that users search against. The document processor performs some or all of the following steps:

Normalizes the document stream to a predefined format.

Breaks the document stream into desired retrievable units.

Isolates and metatags subdocument pieces.

Identifies potential indexable elements in documents.

Deletes stop words.

Stems terms.

Extracts index entries.

Computes weights.

Creates and updates the main inverted file against which the search engine searches in order to match queries to documents.
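The steps above can be sketched in a few lines. This is only a toy illustration of the pipeline, not any engine's actual code: the stop list is invented for the example, and the crude suffix-stripping function stands in for a real stemmer such as Porter's.

```python
import re
from collections import defaultdict

# Toy stop list -- real engines use much larger ones.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "over"}

def stem(term):
    # Crude suffix stripping in place of a real stemming algorithm.
    for suffix in ("ing", "ed", "s"):
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            return term[:-len(suffix)]
    return term

def build_inverted_file(docs):
    """Map each index term to {doc_id: [positions]} -- the index entries
    plus an indication of their position and frequency, as described above."""
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        tokens = re.findall(r"[a-z0-9]+", text.lower())  # normalize + tokenize
        for pos, tok in enumerate(tokens):
            if tok in STOP_WORDS:                        # delete stop words
                continue
            index[stem(tok)].setdefault(doc_id, []).append(pos)  # stem, index
    return index

docs = {1: "The swift brown fox jumped over the lazy dog",
        2: "A lazy dog sleeps"}
index = build_inverted_file(docs)
print(dict(index["jump"]))   # term -> postings with positions: {1: [4]}
```

The postings for each term record which documents it appears in and where, which is all the matcher needs later on.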

 

The document processor extracts the remaining entries from the original document. For example, the following paragraph shows the full text sent to a search engine for processing:

Milosevic’s comments, carried by the official news agency Tanjug, cast doubt over the governments at the talks, which the international community has called to try to prevent an all-out war in the Serbian province. "President Milosevic said it was well known that Serbia and Yugoslavia were firmly committed to resolving problems in Kosovo, which is an integral part of Serbia, peacefully in Serbia with the participation of the representatives of all ethnic communities," Tanjug said. Milosevic was speaking during a meeting with British Foreign Secretary Robin Cook, who delivered an ultimatum to attend negotiations in a week’s time on an autonomy proposal for Kosovo with ethnic Albanian leaders from the province. Cook earlier told a conference that Milosevic had agreed to study the proposal.

 

Processing reduces this text for searching to the following:

Milosevic comm carri offic new agen Tanjug cast doubt govern talk interna commun call try prevent all-out war Serb province President Milosevic said well known Serbia Yugoslavia firm commit resolv problem Kosovo integr part Serbia peace Serbia particip representa ethnic commun Tanjug said Milosevic speak meeti British Foreign Secretary Robin Cook deliver ultimat attend negoti week time autonomy propos Kosovo ethnic Alban lead province Cook earl told conference Milosevic agree study propos.

The output is then inserted and stored in an inverted file that lists the index entries and an indication of their position and frequency of occurrence. The specific nature of the index entries, however, will vary based on the earlier decision concerning what constitutes an "indexable term." More sophisticated document processors will have phrase recognizers, as well as Named Entity recognizers and categorizers, to ensure that index entries such as Milosevic are tagged as a Person and entries such as Yugoslavia and Serbia as Countries.

Term weight assignment. Weights are assigned to terms in the index file. The simplest of search engines just assign a binary weight: 1 for presence and 0 for absence. The more sophisticated the search engine, the more complex the weighting scheme. Measuring the frequency of occurrence of a term in the document creates more sophisticated weighting, with length-normalization of frequencies still more sophisticated. Extensive experience in information retrieval research over many years has clearly demonstrated that the optimal weighting comes from use of "tf/idf." This algorithm measures the frequency of occurrence of each term within a document. Then it compares that frequency against the frequency of occurrence in the entire database.

Not all terms are good "discriminators" — that is, all terms do not single out one document from another very well. A simple example would be the word "the." This word appears in too many documents to help distinguish one from another. A less obvious example would be the word "antibiotic." In a sports database when we compare each document to the database as a whole, the term "antibiotic" would probably be a good discriminator among documents, and therefore would be assigned a high weight. Conversely, in a database devoted to health or medicine, "antibiotic" would probably be a poor discriminator, since it occurs very often. The TF/IDF weighting scheme assigns higher weights to those terms that really distinguish one document from the others.
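The weighting just described can be sketched with the common tf × log(N/df) form of tf/idf; the exact variant any given engine uses is not public, and the frequencies below are made up for illustration.

```python
import math

def tf_idf(term_freq, doc_freq, num_docs):
    """Weight = frequency of the term in the document, scaled by how rare
    the term is across the whole database (log of N / document frequency)."""
    return term_freq * math.log(num_docs / doc_freq)

# "antibiotic" in a 1,000-doc sports database: rare across documents,
# so it is a good discriminator and earns a high weight.
print(tf_idf(term_freq=5, doc_freq=10, num_docs=1000))

# "the" appears in every document, so its weight collapses to zero.
print(tf_idf(term_freq=50, doc_freq=1000, num_docs=1000))   # 0.0
```

The same term would score low in a medical database where "antibiotic" occurs everywhere, which is exactly the behavior described above.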

Query Processor

Query processing has seven possible steps, though a system can cut these steps short and proceed to match the query to the inverted file at any of a number of places during the processing. Document processing shares many steps with query processing. More steps and more documents make the process more expensive for processing in terms of computational resources and responsiveness. However, the longer the wait for results, the higher the quality of results. Thus, search system designers must choose what is most important to their users — time or quality. Publicly available search engines usually choose time over very high quality, having too many documents to search against.

The steps in query processing are as follows (with the option to stop processing and start matching indicated as "Matcher"):

Tokenize query terms.

Recognize query terms vs. special operators.

————————> Matcher

At this point, a search engine may take the list of query terms and search them against the inverted file. In fact, this is the point at which the majority of publicly available search engines perform the search.

Delete stop words.

Stem words.

Create query representation.

————————> Matcher

Expand query terms.

Compute weights.

————————> Matcher

 

Step 1: Tokenizing. As soon as a user inputs a query, the search engine — whether a keyword-based system or a full natural language processing (NLP) system — must tokenize the query stream, i.e., break it down into understandable segments. Usually a token is defined as an alpha-numeric string that occurs between white space and/or punctuation.

Step 2: Parsing. Since users may employ special operators in their query, including Boolean, adjacency, or proximity operators, the system needs to parse the query first into query terms and operators. These operators may occur in the form of reserved punctuation (e.g., quotation marks) or reserved terms in specialized format (e.g., AND, OR). In the case of an NLP system, the query processor will recognize the operators implicitly in the language used no matter how the operators might be expressed (e.g., prepositions, conjunctions, ordering).

Steps 3 and 4: Stop list and stemming. Some search engines will go further and stop-list and stem the query, similar to the processes described above in the Document Processor section. The stop list might also contain words from commonly occurring querying phrases, such as "I’d like information about." However, since most publicly available search engines encourage very short queries, as evidenced by the small query window provided, the engines may drop these two steps.

Step 5: Creating the query. How each particular search engine creates a query representation depends on how the system does its matching. If a statistically based matcher is used, then the query must match the statistical representations of the documents in the system. Good statistical queries should contain many synonyms and other terms in order to create a full representation. If a Boolean matcher is utilized, then the system must create logical sets of the terms connected by AND, OR, or NOT.

An NLP system will recognize single terms, phrases, and Named Entities. If it uses any Boolean logic, it will also recognize the logical operators from Step 2 and create a representation containing logical sets of the terms to be AND’d, OR’d, or NOT’d.
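Those Boolean sets reduce to plain set algebra over posting lists. The miniature index below is invented for illustration:

```python
# Posting lists: term -> set of doc IDs containing it (toy index).
postings = {
    "kosovo":  {1, 2, 5},
    "serbia":  {1, 3, 5},
    "cycling": {4},
}

ALL_DOCS = frozenset({1, 2, 3, 4, 5})

def AND(a, b): return postings.get(a, set()) & postings.get(b, set())
def OR(a, b):  return postings.get(a, set()) | postings.get(b, set())
def NOT(a):    return set(ALL_DOCS) - postings.get(a, set())

print(sorted(AND("kosovo", "serbia")))                   # docs with both terms
print(sorted(OR("kosovo", "cycling")))                   # docs with either term
print(sorted(AND("kosovo", "serbia") & NOT("cycling")))  # compound query
```

Each operator is just an intersection, union, or complement of document-ID sets, which is why Boolean matching against an inverted file is so cheap.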

At this point, a search engine may take the query representation and perform the search against the inverted file. More advanced search engines may take two further steps.

Step 6: Query expansion. Since users of search engines usually include only a single statement of their information needs in a query, it becomes highly probable that the information they need may be expressed using synonyms, rather than the exact query terms, in the documents which the search engine searches against. Therefore, more sophisticated systems may expand the query into all possible synonymous terms and perhaps even broader and narrower terms.

This process approaches what search intermediaries did for end users in the earlier days of commercial search systems. Back then, intermediaries might have used the same controlled vocabulary or thesaurus used by the indexers who assigned subject descriptors to documents. Today, resources such as WordNet are generally available, or specialized expansion facilities may take the initial query and enlarge it by adding associated vocabulary.
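A minimal sketch of Step 6, using an invented miniature thesaurus in place of a resource like WordNet:

```python
# Toy synonym table -- real systems draw on WordNet or a controlled vocabulary.
SYNONYMS = {
    "war": ["conflict", "fighting"],
    "talks": ["negotiations"],
}

def expand(query_terms):
    """Expand each query term with its known synonyms."""
    expanded = []
    for term in query_terms:
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand(["kosovo", "war"]))   # ['kosovo', 'war', 'conflict', 'fighting']
```

The expanded list then matches documents that express the same need in different words, at the cost of some precision.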

Step 7: Query term weighting (assuming more than one query term). The final step in query processing involves computing weights for the terms in the query. Sometimes the user controls this step by indicating either how much to weight each term or simply which term or concept in the query matters most and must appear in each retrieved document to ensure relevance.

Leaving the weighting up to the user is not common, because research has shown that users are not particularly good at determining the relative importance of terms in their queries. They can’t make this determination for several reasons. First, they don’t know what else exists in the database, and document terms are weighted by being compared to the database as a whole. Second, most users seek information about an unfamiliar subject, so they may not know the correct terminology.

Few search engines implement system-based query weighting, but some do an implicit weighting by treating the first term(s) in a query as having higher significance. The engines use this information to provide a list of documents/pages to the user.

After this final step, the expanded, weighted query is searched against the inverted file of documents.

 

Search and Matching Function

How systems carry out their search and matching functions differs according to which theoretical model of information retrieval underlies the system’s design philosophy. Since making the distinctions between these models goes far beyond the goals of this article, we will only make some broad generalizations in the following description of the search and matching function.

Searching the inverted file for documents meeting the query requirements, referred to simply as "matching," is typically a standard binary search, no matter whether the search ends after the first two, five, or all seven steps of query processing. While the computational processing required for simple, unweighted, non-Boolean query matching is far simpler than when the model is an NLP-based query within a weighted, Boolean model, it also follows that the simpler the document representation, the query representation, and the matching algorithm, the less relevant the results, except for very simple queries, such as one-word, non-ambiguous queries seeking the most generally known information.

Having determined which subset of documents or pages matches the query requirements to some degree, a similarity score is computed between the query and each document/page based on the scoring algorithm used by the system. Scoring algorithms base their rankings on the presence/absence of query term(s), term frequency, tf/idf, Boolean logic fulfillment, or query term weights. Some search engines use scoring algorithms not based on document contents, but rather on relations among documents or the past retrieval history of documents/pages.

After computing the similarity of each document in the subset of documents, the system presents an ordered list to the user. The sophistication of the ordering of the documents again depends on the model the system uses, as well as the richness of the document and query weighting mechanisms. For example, search engines that only require the presence of any alpha-numeric string from the query occurring anywhere, in any order, in a document would produce a very different ranking than one by a search engine that performed linguistically correct phrasing for both document and query representation and that utilized the proven tf/idf weighting scheme.
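One widely used way to compute such a similarity score is the cosine between the query's term-weight vector and each document's; the weights below are toy tf values invented for the example, not real engine data.

```python
import math

def cosine(q, d):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * d.get(t, 0.0) for t, w in q.items())
    norm = (math.sqrt(sum(w * w for w in q.values()))
            * math.sqrt(sum(w * w for w in d.values())))
    return dot / norm if norm else 0.0

query = {"kosovo": 1.0, "serbia": 1.0}
docs = {
    "doc_a": {"kosovo": 3.0, "serbia": 2.0, "talks": 1.0},
    "doc_b": {"cycling": 4.0, "video": 2.0},
}

# Present the ordered list to the user, highest score first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)   # doc_a matches both query terms, doc_b matches neither
```

Swapping in tf/idf weights instead of raw term frequencies changes the scores but not the mechanics of the ranking step.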

However the search engine determines rank, the ranked results list goes to the user, who can then simply click and follow the system’s internal pointers to the selected document/page.

More sophisticated systems will go even further at this stage and allow the user to provide some relevance feedback or to modify their query based on the results they have seen. If either of these are available, the system will then adjust its query representation to reflect this value-added feedback and re-run the search with the improved query to produce either a new set of documents or a simple re-ranking of documents from the initial search.

What Document Features Make a Good Match to a Query

We have discussed how search engines work, but what features of a query make for good matches? Let’s look at the key features and consider some pros and cons of their utility in helping to retrieve a good representation of documents/pages.

Term frequency: How frequently a query term appears in a document is one of the most obvious ways of determining a document’s relevance to a query. While most often true, several situations can undermine this premise. First, many words have multiple meanings — they are polysemous. Think of words like "pool" or "fire." Many of the non-relevant documents presented to users result from matching the right word, but with the wrong meaning.

Also, in a collection of documents in a particular domain, such as education, common query terms such as "education" or "teaching" are so common and occur so frequently that an engine’s ability to distinguish the relevant from the non-relevant in a collection declines sharply. Search engines that don’t use a tf/idf weighting algorithm do not appropriately down-weight the overly frequent terms, nor are higher weights assigned to appropriate distinguishing (and less frequently-occurring) terms, e.g., "early-childhood."

Location of terms: Many search engines give preference to words found in the title or lead paragraph or in the meta data of a document. Some studies show that the location — in which a term occurs in a document or on a page — indicates its significance to the document. Terms occurring in the title of a document or page that match a query term are therefore frequently weighted more heavily than terms occurring in the body of the document. Similarly, query terms occurring in section headings or the first paragraph of a document may be more likely to be relevant.

Link analysis: Some engines give extra weight to pages that are referred to by many other pages, that is, pages with a high number of "in-links".

Popularity: Google and several other search engines add popularity to link analysis to help determine the relevance or value of pages. Popularity utilizes data on the frequency with which a page is chosen by all users as a means of predicting relevance. While popularity is a good indicator at times, it assumes that the underlying information need remains the same.

Date of Publication: Some search engines assume that the more recent the information is, the more likely that it will be useful or relevant to the user. The engines therefore present results beginning with the most recent to the less current.

Length: While length per se does not necessarily predict relevance, it is a factor when used to compute the relative merit of similar pages. So, in a choice between two documents both containing the same query terms, the document that contains a proportionately higher occurrence of the term relative to the length of the document is assumed more likely to be relevant.

Proximity of query terms: When the terms in a query occur near to each other within a document, it is more likely that the document is relevant to the query than if the terms occur at greater distance. While some search engines do not recognize phrases per se in queries, some search engines clearly rank documents in results higher if the query terms occur adjacent to one another or in closer proximity, as compared to documents in which the terms occur at a distance.

Proper nouns sometimes have higher weights, since so many searches are performed on people, places, or things. While this may be useful, if the search engine assumes that you are searching for a name instead of the same word as a normal everyday term, then the search results may be peculiarly skewed. Imagine getting information on "Madonna," the rock star, when you were looking for pictures of Madonnas for an art history class.

Summary

Now that we have covered how a search engine works, we can discuss methods to take advantage of them. Let’s start with content. As you saw in the above pages, search engines are simple text parsers. They take a series of words and try to reduce them to their core meaning. They can’t understand text, nor do they have the capability of discerning between grammatically correct text and complete gibberish. This of course will change over time as search engines evolve and the cost of hardware falls, but we Black Hats will evolve as well, always aiming to stay at least one step ahead. Let’s discuss the basics of generating content as well as some software used to do so, but first, we need to understand duplicate content. A widely passed-around myth on webmaster forums is that duplicate content is viewed by search engines as a percentage. As long as you stay below the threshold, you pass by penalty-free. It’s a nice thought; it’s just too bad that it is completely wrong.

Duplicate Content

I’ve read seemingly hundreds of forum posts discussing duplicate content, none of which gave the full picture, leaving me with more questions than answers. I decided to spend some time doing research to find out exactly what goes on behind the scenes. Here is what I have discovered.

Most people assume that duplicate content is evaluated at the page level, when in fact it is far more complex than that. Simply saying that "by changing 25 percent of the text on a page it is no longer duplicate content" is not an accurate statement. Let's examine why.

To gain some understanding we need to take a look at the k-shingling algorithm that may or may not be in use by the major search engines (my money is that it is). I've seen the following used as an example elsewhere, so let's use it here as well.

Let’s suppose that you have a page that contains the following text:

The swift brown fox jumped over the lazy dog.

Before we get to this point the search engine has already stripped all tags and HTML from the page leaving just this plain text behind for us to take a look at.

The shingling algorithm essentially finds word groups within a body of text in order to determine the uniqueness of the text. The first step is to strip out stop words like "and", "the", "of", and "to", along with other filler words, leaving only the action words that are considered the core of the content. Once this is done, the following "shingles" are created from the above text (I'm going to include the stop words for simplicity):

The swift brown fox

swift brown fox jumped

brown fox jumped over

fox jumped over the

jumped over the lazy

over the lazy dog

These shingles are essentially unique fingerprints that identify this block of text. The search engine can now compare this "fingerprint" to other pages in an attempt to find duplicate content. As duplicates are found, a "duplicate content" score is assigned to the page. If too many "fingerprints" match other documents, the score becomes high enough that the search engines flag the page as duplicate content, sending it to supplemental hell or, worse, deleting it from their index completely.

Now compare that to a second page containing the following text:

My old lady swears that she saw the lazy dog jump over the swift brown fox.

The above gives us the following shingles:

my old lady swears

old lady swears that

lady swears that she

swears that she saw

that she saw the

she saw the lazy

saw the lazy dog

the lazy dog jump

lazy dog jump over

dog jump over the

jump over the swift

over the swift brown

the swift brown fox

Comparing these two sets of shingles, we can see that only one matches ("the swift brown fox"). Thus it is unlikely that these two documents are duplicates of one another. No one but Google knows what percentage of shingles must match for two documents to be considered duplicates, but some thorough testing would sure narrow it down ;).
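The shingle comparison above is easy to reproduce. A minimal sketch, keeping the stop words in as the example does:

```python
def shingles(text, k=4):
    """Break text into the set of overlapping k-word shingles."""
    words = text.lower().replace(".", "").split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

a = shingles("The swift brown fox jumped over the lazy dog.")
b = shingles("My old lady swears that she saw the lazy dog "
             "jump over the swift brown fox.")

print(a & b)                               # the one matching shingle
print(round(len(a & b) / len(a | b), 3))   # Jaccard similarity of the two sets
```

Only one shingle out of nineteen total overlaps, which is why these two sentences read as unique to a shingling comparison even though they use nearly the same words.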

So what can we take away from the above examples? First and foremost, we quickly realize that duplicate content detection is far more complex than saying "document A and document B are 50 percent similar". Second, we can see that people adding stop words and filler words to avoid duplicate content are largely wasting their time. It's the action words that should be the focus. Changing action words without altering the meaning of a body of text may very well be enough to get past these algorithms. Then again, there may be other mechanisms at work that we can't yet see, rendering that impossible as well. I suggest experimenting and finding what works in your situation.

That last paragraph is the really important part when generating content. You can't simply add generic stop words here and there and expect to fool anyone. Remember, we're dealing with a computer algorithm, not some supernatural power. Everything you do should be from the standpoint of a scientist: think through every decision using logic and reasoning. There is no magic involved in SEO, just raw data and numbers. Always split test and perform controlled experiments.

What Makes A Good Content Generator?

We now understand how a search engine parses documents on the web, and we understand the intricacies of duplicate content and what it takes to avoid it. It is time to check out some basic content generation techniques.

One of the more commonly used text spinners is known as Markov. Markov isn't actually intended for content generation; it's based on something called a Markov chain, developed by the mathematician Andrey Markov. The algorithm models which words follow which in a body of content, then walks those transitions to reorder the words into new text. This produces largely unique text, but it's also typically VERY unreadable. The quality of the output really depends on the quality of the input. The other issue with Markov is that it will likely never pass a human review for readability. If you don't shuffle the Markov chains enough, you also run into duplicate content issues because of the nature of shingling, as discussed earlier. Some people get around this by replacing words in the content with synonyms. I personally stopped using Markov back in 2006 or 2007 after developing my own proprietary content engine. Popular software that uses Markov chains includes RSSGM and YAGC, both of which are pretty old and outdated at this point. They are worth taking a look at just to understand the fundamentals, but there are FAR better packages out there.
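For reference, the core of a Markov text spinner is only a few lines. This is a bare order-1 sketch to show the mechanism, not the actual engine inside RSSGM or YAGC:

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the source."""
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain, start, length=12, seed=None):
    """Walk the chain from a start word, picking a random follower each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

source = ("the swift brown fox jumped over the lazy dog "
          "and the lazy dog barked at the swift brown fox")
chain = build_chain(source)
print(generate(chain, "the", seed=42))
```

Every adjacent word pair in the output is a pair that existed in the source, which is why the text is "plausible" word to word but reads as gibberish sentence to sentence.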

So, we've talked about the old methods of doing things, but this isn't 1999; you can't fool the search engines by simply repeating a keyword over and over in the body of your pages (I wish it were still that easy). So what works today? Now and in the future, LSI is becoming more and more important. LSI stands for Latent Semantic Indexing. It sounds complicated, but it really isn't. LSI is basically a process by which a search engine can infer the meaning of a page based on its content. For example, let's say it indexes a page and finds words like atomic bomb, Manhattan Project, Germany, and Theory of Relativity. The idea is that the search engine can process those words, find relational data, and determine that the page is about Albert Einstein. Ranking for a keyword phrase is no longer as simple as having content that repeats the target phrase over and over like the good old days. Now we need other key phrases that the search engine considers related to the main key phrase.

So if Markov is easy to detect and LSI is starting to become more important, which software works, and which doesn’t?

Software

Fantomaster Shadowmaker: This is probably one of the oldest and most commonly known high end cloaking packages being sold. It’s also one of the most out of date. For $3,000.00 you basically get a clunky outdated interface for slowly building HTML pages. I know, I’m being harsh, but I was really let down by this software. The content engine doesn’t do anything to address LSI. It simply splices unrelated sentences together from random sources while tossing in your keyword randomly. Unless things change drastically I would avoid this one.

SEC (Search Engine Cloaker): Another well known paid script. This one is of good quality and, with work, does provide results. The content engine is mostly manual, making you build sentences which are then mixed together for your content. If you understand SEO and have the time to dedicate to creating the content, the pages built last a long time. I do have two complaints. The software is SLOW; it takes days just to set up a few decent pages, which in itself isn't very black hat. Remember, we're lazy! The other gripe is the ip cloaking. Their ip list is terribly out of date, containing only a couple thousand ip's as of this writing.

 
SSEC (Simplified Search Engine Content): This is one of the best IP delivery systems on the market. Their ip list is updated daily and contains close to 30,000 ip's. The member-only forums are the best in the industry; the subscription is worth it just for the information contained there. The content engine is also top notch. It's flexible, so you can choose to use their proprietary scraped content system, which automatically scrapes search engines for your content, or you can use custom content similar in fashion to SEC above, but faster. You can also mix and match the content sources, giving you the ultimate in control. This is the only software as of this writing that takes LSI into account directly from within the content engine. It is also the fastest page builder I have come across; you can easily put together several thousand sites, each with hundreds of pages of content, in just a few hours. Support is top notch, and the knowledgeable staff really know what they are talking about. This one gets a gold star from me.

BlogSolution: Sold as an automated blog builder, BlogSolution falls short in almost every important area. The blogs created are not WordPress blogs, but rather proprietary blog software written specifically for BlogSolution. This "feature" means your blogs stand out like a sore thumb to the search engines; they don't blend in at all, leaving footprints all over the place. The licensing limits you to 100 blogs, which basically means you can't build enough to make any decent amount of money. The content engine is a joke as well, using rss feeds and leaving you with a bunch of easy to detect duplicate content blogs that rank for nothing.

Blog Cloaker

Another solid offering from the guys who developed SSEC, and the natural evolution of that software. This mass site builder is based around WordPress blogs, and it is the best in the industry hands down. The interface has the feel of a system developed by real professionals. You have the same content options seen in SSEC, but with several different redirection types, including header redirection, JavaScript, meta refresh, and even iframe. This again is an ip cloaking solution, with the same industry leading ip list as SSEC. The monthly subscription may seem daunting at first, but the price of admission is worth every penny if you are serious about making money in this industry. It literally does not get any better than this.

Cloaking

So what is cloaking? Cloaking is simply showing different content to different visitors based on different criteria. Cloaking automatically gets a bad reputation, but that is based mostly on ignorance of how it works. There are many legitimate reasons to cloak pages; in fact, even Google cloaks. Have you ever visited a web site with your cell phone and been automatically directed to the mobile version of the site? Guess what, that's cloaking. How about web pages that automatically show you information based on your location? Guess what, that's cloaking too. Based on that, we can break cloaking down into two main categories: user agent cloaking and ip based cloaking.

User agent cloaking is simply a method of showing different pages or different content to visitors based on the user agent string they visit the site with. A user agent is simply an identifier that every web browser and search engine spider sends to a web server when it connects to a page. Above we used the example of a mobile phone; a Nokia cell phone, for example, will have a user agent similar to:

Mozilla/5.0 (SymbianOS/9.1; U; [en]; Series60/3.0 NokiaE60/4.06.0) AppleWebKit/413 (KHTML, like Gecko) Safari/413

Knowing this, we can tell the difference between a mobile phone visiting our page and a regular visitor viewing our page with Internet Explorer or Firefox for example. We can then write a script that will show different information to those users based on their user agent.
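A minimal sketch of such a script in Python (the marker strings and page names are made up for illustration; production cloakers usually do this server-side in PHP by inspecting the User-Agent header):

```python
MOBILE_MARKERS = ("symbianos", "nokia", "iphone", "android")
BOT_MARKERS = ("googlebot", "bingbot", "slurp")

def pick_page(user_agent):
    """Decide which version of the page to serve for this User-Agent."""
    ua = user_agent.lower()
    if any(m in ua for m in BOT_MARKERS):
        return "seo_page.html"      # keyword-rich page for spiders
    if any(m in ua for m in MOBILE_MARKERS):
        return "mobile.html"        # mobile version of the site
    return "human.html"             # ads / offer for regular visitors

nokia = ("Mozilla/5.0 (SymbianOS/9.1; U; [en]; Series60/3.0 "
         "NokiaE60/4.06.0) AppleWebKit/413 (KHTML, like Gecko) Safari/413")
print(pick_page(nokia))                                               # mobile.html
print(pick_page("Googlebot/2.1 (+http://www.google.com/bot.html)"))   # seo_page.html
```

Notice that the whole decision rests on a string the visitor supplies, which is exactly the weakness discussed next.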

Sounds good, doesn't it? Well, it works for basic things like mobile and non-mobile versions of pages, but it's also very easy to detect, fool, and circumvent. Firefox, for example, has a handy plug-in that allows you to change your user agent string to anything you want. Using that plug-in, I can make your script think that I am a Google search engine bot, thus rendering your cloaking completely useless. So, what else can we do if user agents are so easy to spoof?

IP Cloaking

Every visitor to your web site must first establish a connection from an ip address. These ip addresses resolve to host names via reverse dns lookup, which in turn identify the origin of that visitor. Every search engine crawler must identify itself with a unique signature viewable by reverse dns lookup. This means we have a sure-fire method for identifying and cloaking based on ip address. It also means we don't rely on the user agent at all, so there is no way to circumvent ip based cloaking (although some caution must be taken, as we will discuss). The most difficult part of ip cloaking is compiling a list of known search engine ip's. Luckily, software like Blog Cloaker and SSEC already does this for us.

Once we have that information, we can show different pages to different users based on the ip they visit our page from. For example, I can show a search engine bot a keyword-targeted page full of key phrases related to what I want to rank for, while a human visiting the same page sees an ad or an affiliate product so I can make some money. See the power and potential here?
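The verification itself is straightforward: do a reverse dns lookup on the visitor's ip, then a confirming forward lookup so a forged PTR record can't fool you. A sketch (the resolver functions are injectable parameters so the logic can be tested without the network; real cloakers cache results into their ip lists rather than resolving on every request):

```python
import socket

def is_googlebot(ip, reverse=socket.gethostbyaddr, forward=socket.gethostbyname):
    """Verify a claimed Googlebot: reverse dns must land in a Google
    crawler domain, and the forward lookup must map back to the same ip."""
    try:
        host = reverse(ip)[0]
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return forward(host) == ip
    except OSError:
        return False

def serve(ip):
    """Pick the page version based on who is connecting."""
    return "seo_page.html" if is_googlebot(ip) else "human.html"
```

The double lookup matters: anyone can set a reverse record claiming to be googlebot.com, but only Google controls the forward records inside that domain.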

So how can we detect ip cloaking? Every major search engine maintains a cache of the pages it indexes. This cache contains the page as the search engine bot saw it at indexing time, which means your competition can view your cloaked page by clicking on the cache in the SERPS. That's ok; it's easy to get around. Using the meta tag noarchive in your pages forces the search engines to show no cached copy of your page in the search results, so you avoid snooping webmasters. The only other method of detection involves ip spoofing, but that is a very difficult and time consuming thing to pull off. Basically, you configure a computer to act as if it is using one of Google's ip's when it visits a page. This would let you connect as though you were a search engine bot, but the response data would be sent back to the ip you are spoofing, which isn't your computer, so you are still out of luck.

The lesson here? If you are serious about this, use ip cloaking. It is very difficult to detect and by far the most solid option.

Link Building

As we discussed earlier, black hats are basically white hats, only lazy! As we build pages, we also need links to get those pages to rank. Let's discuss some common and not so common methods for doing so.

Blog ping: This one is quite old, but still widely used. Blog indexing services set up a protocol in which a web site can send a ping whenever new pages are added to a blog. The service can then send over a bot that grabs the page content for indexing and searching, or simply adds it as a link in their blog directory. Black hats exploit this by writing scripts that send out massive numbers of pings to various services in order to entice bots to crawl their pages. This method certainly drives the bots, but in the last couple years it has lost most of its power as far as getting pages to rank.
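The ping itself is a tiny XML-RPC call named weblogUpdates.ping. A sketch that just builds the request body offline; a real script would POST it to each service's endpoint:

```python
import xmlrpc.client

def build_ping(blog_name, blog_url):
    """Build the standard weblogUpdates.ping XML-RPC request body."""
    return xmlrpc.client.dumps((blog_name, blog_url), "weblogUpdates.ping")

body = build_ping("My Blog", "http://example.com/blog")
print(body)

# Sending it is one line against a real ping endpoint, e.g.:
# xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/").weblogUpdates.ping(
#     "My Blog", "http://example.com/blog")
```

Mass-ping scripts simply loop this call over a list of services and a list of blog URLs.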

Trackback: Another method of communication used by blogs, trackbacks are basically a method in which one blog can tell another blog that it has posted something related to or in response to an existing blog post. As a black hat, we see that as an opportunity to inject links to thousands of our own pages by automating the process and sending out trackbacks to as many blogs as we can. Most blogs these days have software in place that greatly limits or even eliminates trackback spam, but it’s still a viable tool.
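Under the hood a trackback is just a form-encoded HTTP POST to the target post's trackback URL. A sketch that builds the body (the field names follow the trackback convention; the URLs here are placeholders):

```python
from urllib.parse import urlencode

def build_trackback(post_url, title, excerpt, blog_name):
    """Build the form-encoded body of a trackback ping. A real script
    would POST this to the target post's trackback endpoint with
    Content-Type: application/x-www-form-urlencoded."""
    return urlencode({
        "url": post_url,          # the link back to our own page
        "title": title,
        "excerpt": excerpt,
        "blog_name": blog_name,
    })

body = build_trackback("http://example.com/page", "Related post",
                       "A short teaser...", "My Blog")
print(body)
```

Automating trackback spam is just this call in a loop over a scraped list of blog posts, which is exactly why modern blog software validates or moderates incoming trackbacks.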

EDU links: A couple years ago black hats noticed an odd trend. Universities and government agencies with very high ranking web sites often have very old message boards they have long forgotten about, but that still allow public access. We took advantage of that by posting millions of links to our pages on these abandoned sites. This gave a HUGE boost to rankings and made some very lucky Viagra spammers millions of dollars. The effectiveness of this approach has diminished over time.

Forums and Guest books: The internet contains millions of forums and guest books, all ripe for the picking. While most forums are heavily moderated (at least the active ones), that still leaves thousands in which you can drop links where no one will likely notice or even care. We're talking about abandoned forums, old guest books, etc. You can get links dropped on active forums as well, but it takes more creativity: putting up a post related to the topic of the forum and hiding your link in the BB code for a smiley, for example. Software packages like Xrumer made this a VERY popular way to gather backlinks. So much so that most forums now have methods in place to detect and reject these types of links, but some people still use them successfully.

Link Networks: Also known as link farms, these have been popular for years. Most are very simplistic in nature. Page A links to page B, page B links to page C, then back to A. These are pretty easy to detect because of the limited range of ip’s involved. It doesn’t take much processing to figure out that there are only a few people involved with all of the links. So, the key here is to have a very diverse pool of links.

Money Making Strategies

We now have a solid understanding of cloaking, how a search engine works, content generation, software to avoid, software that is pure gold and even link building strategies. So how do you pull all of it together to make some money?

Landing Pages: The idea here is a page that adapts to the traffic you send it. You load up your money keyword list, set up a template with your ads or offers, then send all of your doorway/cloaked traffic to the index page. The Landing Page Builder shows the best possible page of ads based on what the incoming user searched for. It couldn't be easier, and it automates the difficult tasks we all hate.

Affiliate Marketing: We all know what an affiliate program is. There are literally tens of thousands of affiliate programs with millions of products to sell. The most difficult part of affiliate marketing is getting well qualified, targeted traffic. That again is where good software and cloaking come into play. Some networks and affiliates allow direct linking, where you set up your cloaked pages with all of your product keywords, then redirect straight to the merchant's or affiliate's sales page. This often results in the highest conversion rates, but as I said, some affiliates don't allow direct linking. So, again, that's where landing pages come in: either building your own (which we are far too lazy to do), or using something like Landing Page Builder, which automates everything for us. Landing pages give us a place to send and clean our traffic; they also prequalify the buyer and make sure the quality of the traffic sent to the affiliate is as high as possible. After all, we want to make money, but we also want to keep a strong relationship with the affiliate so we can get paid.

Conclusion

As we can see, Black Hat Marketing isn’t all that different from White Hat marketing. We automate the difficult and time consuming tasks so we can focus on the important tasks at hand. I would like to thank you for taking the time to read this.


 

 


Why Wait Days or Weeks For Google To List Your Website When You Can Use This Secret Method To Getting Your Website Listed on The First Page of Google In 60 Minutes or Less!!!



I don't care if your site is 10 years old or 10 SECONDS old – I'll show you proof of how you can get YOUR site on the first page of Google in 60 minutes or less!
 

What I am about to show you will allow you to dominate the entire first page of Google for YOUR niche with a brand new website, even if that site is just a single web page!!!

Want to know more……. 

Get the Secret Method Now……


Click the Image to download

 
Never Have Enough Backlinks
Do It With A Push Of A Button

 

We all need to generate backlinks to our websites to get top listings in Google.

  • Problem is, most of the time it's a pain in the a%% without automation.
  • The other factor for some is the cost of buying automated tools.

Well I have a solution for both.

Just recently I was introduced to iMacros, the free Firefox addon that can automate just about anything in your business.

For some of you programming is not an option, so we have taken care of this for you.

 

Here is what you will receive.

  • 51 of the top bookmarking websites that give you quality backlinks to your websites.


This is not just any set of bookmarking websites. These are the cream of the crop; I have personally clicked 1,000's of URLs to select the best. These bookmarking websites get indexed really fast in Google, and they have also been selected for not requiring any captchas, which makes it easy to create accounts quickly.

  • You will receive a macro to create your accounts in the bookmarking sites.
  • You will also receive 2 additional macros to submit your content to the bookmarking sites.


The great thing about working with these macros is that you have complete control over your posting if you wish to play around with them. It is not difficult to make changes.

Download it all Here…….  FREE of Course……

CLICK HERE to Download

 

Discover The Little Known Secrets
To Creating Your Own Automatically Updating AutoBlogs!

When autoblogs are done right, it's not easy to tell they are actually autoblogged. If you care about your blog's visitors, you care about quality content, and if you have quality content that is regularly updated, Google and the other search engines will love it!

  • Works for both Blogger and WordPress blogs!
  • Create autoblogs whether you have content or not!
  • How to keep your autoblogs from being de-indexed and banned!
  • Where to get the most suitable web hosting for your autoblogs!
  • What exact tools and plugins to use to autoblog correctly!
  • Where to get free traffic for your autoblogs!
  • How to keep others from flagging your autoblogs as spam blogs!
  • How to become a master of autoblogs!
  • And much more!

If that's not enough to convince you that you can finally earn some good AdSense checks and affiliate commissions, see here what some autoblogs actually look like:


 

Download it Here……

 

Click here to Download

 

Subscribe to BlackHatBUZZ



