Buy and Sell your Downloadable Items - Make FAST CASH !!
This method is so BLACKHAT that I was hesitant about sharing it.

Well, this guide will teach you how to earn from Twitter the TOTALLY BlackHat way.

I have included a script that will automate the process and increase your earnings TENFOLD.

SO SIMPLE EVEN THE NEWEST NOOB CAN PULL IT OFF!!

 

Download it here……..

 

CLICK the image to download


How To Quickly Slam Easy CPA Cash Into Your Account By Hijacking Massive Amounts of Smokin’ Hot Traffic

 

HOW WOULD YOU LIKE TO LEARN TO SEEK OUT HUGE SURGES OF PRIME TRAFFIC, HIJACK IT DIRT CHEAP, AND CONVERT IT LIKE CRAZY!

Here’s what this method will teach you ….
 
  • The exact technique to find massive amounts of premium traffic
  • How to profile your hijacked traffic
  • How to choose the right CPA offers for your traffic
  • Exactly what to do to convert this traffic

If you never did anything else with CPA marketing, the technique contained in this report is enough to make you thousands of dollars every month … Act Now!

 

Get it Here……….

Click here to Get CPA Cash Slam

This is a Black Hat method, not white hat or grey hat, and yes, it is raking in Tons of Money!!

The AutoPilot Income System

The method itself is innovative and unique, and because so few people are using it at the moment, it's easy to make money with it.

What you will learn:


  • The exact 8 steps that have been tried, tested, and fine-tuned to secure an extra $9,000 in the bank each month, and that you can follow to achieve the same results.

  • Why you don't need any SEO knowledge whatsoever for this to work, and how you will get huge amounts of prequalified traffic without having to pay a penny!

  • A unique method I am currently using that is a surefire way to get you started on the road to financial stability!

  • REVEALED: You will discover why 99.999% of people working a 9-5 job are never going to earn the sort of money they want to be earning, and how earning your income on autopilot without having to lift a finger is the way towards true riches!

  • This NEW concept can be applied over and over, every single day, as there is a never-ending resource for you to use to make this method work!

  • How working just a few hours to get this set up will flood your bank with a continuous stream of income for the next month, with you having to do nothing to maintain it!

 

I Must Be Nuts ’cause I Am Giving This Away for FREE!!!

Hurry before I come to my senses !!

Enjoy and Prosper,

$250 A Day With the Autopilot Income System

Download it Here ………..

Click Here to Download

**CAUTION**
 
ARE YOU READY TO MAKE AT LEAST $100 A DAY?

 

This is the method you have been waiting for all year !!

In our simple, low-cost testing, we averaged $100 to $200 per day.
Scaled up this CAN and WILL make you an easy five figures a month – nearly on autopilot!

 

With minimal set up and minimal investment, you can start making the kind of easy money you all want. With this BLACKHAT method, you can see the kind of income in 2009 that you only dreamed about prior! You'll be cashing BIG CHECKS from the affiliate networks by the end of the month! Get Black Hat Cash Smash NOW and join the WINNING TEAM!

Our method includes all the information you need to get started right away, in a simple guide. It's straight to the point, with no fluff or filler – 100% lean, money making technique to get you started and earning from DAY ONE!

Now – Let's Be Clear on a Few Things …..

1) This is BLACKHAT
2) This WILL require some basic knowledge of the internet and marketing, but that doesn't mean you need to be an IM expert to use it! This is good for all but the newest of noobs!
3) This method WILL require a small investment up front for traffic purchasing purposes – but it can be as little as $20!


We'll show you STEP BY STEP  how to start making HUNDREDS OF DOLLARS PER DAY with only a FEW HOURS OF SET UP TIME!

It's like a broken change machine – put money in, MORE MONEY COMES OUT!

 

Download it Here ……

CLICK HERE to Download

"Imagine – A Short Time From Now You Could Be Watching Your CPA Accounts Absolutely Flood With Cash While You Count Your Lucky Stars You Discovered These Secret Black Hat Methods!"

 

Now You Can Really Learn BlackHat Methods – Noob Friendly

 

OR You Could STILL  Be Throwing

Your Money Away On Another Lousy Ebook,
Wondering If You’ll Ever

Earn Any Money At All Online!

Anyone, anywhere, including YOU can learn and use these amazing black hat strategies to quickly and easily create an income of $300 per day or more (per strategy)!


If you are interested in quickly and quietly building a huge income online using my methods, you need to check your morals at the door.

You see, these methods are definitely black hat.  Now that does not mean that the methods are illegal or even immoral, just that they are frowned upon by the do-gooders.  For me, I couldn’t care less what they think.

Let The Rest Worry About What Is Moral And What Is Not While You And I Concentrate On Making Tons Of Cash!

That’s my motto. As long as I am not doing anything illegal, I don’t worry too much about whether my methods are considered white hat or black hat.


The thing is, these black hat strategies really work! It is not uncommon to make $300 per day, working 15 minutes per day! And that is only from one strategy. Truly there is a ton of money to be made with black hat methods.

So you must be wondering by now what some of these strategies are…

I’ve helped quite a few friends like yourself who wanted to quickly get up and running making good money online.

I came to the realization that there is so much money to be had with these methods that there was no reason for me to keep them to myself.

So, in keeping with the spirit of me making as much money as possible, I decided to release:

The World's Fastest, Easiest Blackhat Strategies To Make $300 per Day Or More Online!
 
This guide is for you if….



  • You are sick of paying for traffic. These methods are low cost or even free!

  • You want the straight goods – I am no Shakespeare…. I give you step-by-step instructions to make a pile of dough!

  • You are tired of spending hours a day just to make a few bucks – many of these strategies are set-and-forget!

 

Once You Discover The Power Of These Secret Black Hat Strategies, You Will Never Have Financial Worries Again !

 

Essentially, you can live the life you currently only dream of without slaving like a chump!  

 


You will discover hard core black hat strategies that really make money. Strategies like:

   
The iPhone Bait and Switch Method (set this one up correctly and it can make you a mint for years to come)!

 

The Mystery Shopper Extravaganza (use this untapped source of labor and quietly rake in the dough)!

 

YouTube Cash Grab (This one will make you money hand over fist!)

 

Censored! The last one is so dastardly I can’t even tell you here. Use ONLY in case of financial emergency… think of it as a switchblade in your back pocket, to be used when the need arises…

 

Well enough – I MUST BE CRAZY !!

But get it all FREE HERE……….


Click the IMAGE to Download

 

If you have spent any significant amount of time online, you have likely come across the term Black Hat at one time or another.

This term is usually associated with many negative comments. This article is here to address those comments and provide some insight into the real life of a Black Hat SEO professional. I’ve been involved in internet marketing for close to 10 years now, the last 7 of which have been dedicated to Black Hat SEO. As we will discuss shortly, you can’t be a great Black Hat without first becoming a great White Hat marketer. With the formalities out of the way, let’s get into the meat of things, shall we?

 

What is Black Hat SEO?

The million dollar question that everyone has an opinion on. What exactly is Black Hat SEO?

The answer here depends largely on who you ask. Ask most White Hats and they immediately quote the Google Webmaster Guidelines like a bunch of lemmings. Have you ever really stopped to think about it though? Google publishes those guidelines because they know as well as you and I that they have no way of detecting or preventing what they preach so loudly. They rely on droves of webmasters to blindly repeat everything they say because they are an internet powerhouse and they have everyone brainwashed into believing anything they tell them. This is actually a good thing though. It means that the vast majority of internet marketers and SEO professionals are completely blind to the vast array of tools at their disposal that not only increase traffic to their sites, but also make us all millions in revenue every year.

The second argument you are likely to hear is the age-old "the search engines will ban your sites if you use Black Hat techniques." Sure, this is true if you have no understanding of the basic principles or practices. If you jump in with no knowledge you are going to fail. I’ll give you the secret though. Ready? Don’t use black hat techniques on your White Hat domains. Not directly at least. You aren’t going to build doorway or cloaked pages on your money site; that would be idiotic. Instead you buy several throwaway domains, build your doorways on those, and cloak/redirect the traffic to your money sites. You lose a doorway domain, who cares? Build 10 to replace it. It isn’t rocket science, just common sense. A search engine can’t possibly penalize you for outside influences that are beyond your control. They can’t penalize you for incoming links, nor can they penalize you for sending traffic to your domain from other doorway pages outside of that domain. If they could, I would simply point doorway pages and spam links at my competitors to knock them out of the SERPs. See….. Common sense.

 

So again, what is Black Hat SEO? In my opinion, Black Hat SEO and White Hat SEO are almost no different. White hat webmasters spend time carefully finding link partners to increase rankings for their keywords; Black Hats do the same thing, but we write automated scripts to do it while we sleep. White hat SEOs spend months perfecting the on-page SEO of their sites for maximum rankings; black hat SEOs use content generators to spit out thousands of generated pages to see which version works best. Are you starting to see a pattern here? You should. Black Hat SEO and White Hat SEO are one and the same, with one key difference: Black Hats are lazy. We like things automated. Have you ever heard the phrase "Work smarter, not harder"? We live by those words. Why spend weeks or months building pages only to have Google slap them down with some obscure penalty?

If you have spent any time on webmaster forums you have heard that story time and time again. A webmaster plays by the rules, does nothing outwardly wrong or evil, yet their site is completely gone from the SERPs (Search Engine Results Pages) one morning for no apparent reason. It’s frustrating; we’ve all been there. Months of work gone and nothing to show for it. I got tired of it, as I am sure you are. That’s when it came to me. Who elected the search engines the "internet police"? I certainly didn’t, so why play by their rules? In the following pages I’m going to show you why the search engines' rules make no sense, and further I’m going to discuss how you can use that information to your advantage.

Search Engine 101

As we discussed earlier, every good Black Hat must be a solid White Hat. So, let’s start with the fundamentals. This section is going to get technical as we discuss how search engines work and delve into ways to exploit those inner workings. Let’s get started, shall we?

Search engines match queries against an index that they create. The index consists of the words in each document, plus pointers to their locations within the documents. This is called an inverted file. A search engine or IR (Information Retrieval) system comprises four essential modules:

A document processor

A query processor

A search and matching function

A ranking capability

While users focus on "search," the search and matching function is only one of the four modules. Each of these four modules may cause the expected or unexpected results that consumers get when they use a search engine.

Document Processor

The document processor prepares, processes, and inputs the documents, pages, or sites that users search against. The document processor performs some or all of the following steps:

Normalizes the document stream to a predefined format.

Breaks the document stream into desired retrievable units.

Isolates and meta tags sub document pieces.

Identifies potential indexable elements in documents.

Deletes stop words.

Stems terms.

Extracts index entries.

Computes weights.

Creates and updates the main inverted file against which the search engine searches in order to match queries to documents.
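The steps above can be sketched in a few lines of Python. This is a minimal illustration, not any real engine's code: the stop list, the suffix-stripping "stemmer", and the sample documents are all toy stand-ins.

```python
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "of", "to", "in", "and", "or", "was", "is"}

def process_document(doc_id, text, index):
    """Normalize, tokenize, stop-list, crudely stem, and add
    (document, position) entries to an inverted file."""
    tokens = re.findall(r"[a-z0-9'-]+", text.lower())  # normalize + break into units
    for position, token in enumerate(tokens, start=1):
        if token in STOP_WORDS:                        # delete stop words
            continue
        stem = re.sub(r"(ing|ed|ly|s)$", "", token)    # toy suffix-stripping stemmer
        index[stem].append((doc_id, position))         # index entry with location
    return index

index = defaultdict(list)
process_document(1, "Milosevic was speaking during a meeting", index)
process_document(2, "Cook delivered an ultimatum", index)
print(index["speak"])    # -> [(1, 3)]
print(index["deliver"])  # -> [(2, 2)]
```

The matcher can later look a query term up in `index` and get back every document, and every location within each document, where the term occurs.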

 

The document processor extracts the remaining entries from the original document. For example, the following paragraph shows the full text sent to a search engine for processing:

Milosevic’s comments, carried by the official news agency Tanjug, cast doubt over the governments at the talks, which the international community has called to try to prevent an all-out war in the Serbian province. "President Milosevic said it was well known that Serbia and Yugoslavia were firmly committed to resolving problems in Kosovo, which is an integral part of Serbia, peacefully in Serbia with the participation of the representatives of all ethnic communities," Tanjug said. Milosevic was speaking during a meeting with British Foreign Secretary Robin Cook, who delivered an ultimatum to attend negotiations in a week’s time on an autonomy proposal for Kosovo with ethnic Albanian leaders from the province. Cook earlier told a conference that Milosevic had agreed to study the proposal.

 

The steps above reduce this text for searching to the following:

Milosevic comm carri offic new agen Tanjug cast doubt govern talk interna commun call try prevent all-out war Serb province President Milosevic said well known Serbia Yugoslavia firm commit resolv problem Kosovo integr part Serbia peace Serbia particip representa ethnic commun Tanjug said Milosevic speak meeti British Foreign Secretary Robin Cook deliver ultimat attend negoti week time autonomy propos Kosovo ethnic Alban lead province Cook earl told conference Milosevic agree study propos.

The output is then inserted and stored in an inverted file that lists the index entries and an indication of their position and frequency of occurrence. The specific nature of the index entries, however, will vary based on the decision in Step 4 concerning what constitutes an "indexable term." More sophisticated document processors will have phrase recognizers, as well as Named Entity recognizers and categorizers, to ensure index entries such as Milosevic are tagged as a Person and entries such as Yugoslavia and Serbia as Countries.

Term weight assignment. Weights are assigned to terms in the index file. The simplest of search engines just assign a binary weight: 1 for presence and 0 for absence. The more sophisticated the search engine, the more complex the weighting scheme. Measuring the frequency of occurrence of a term in the document creates more sophisticated weighting, with length-normalization of frequencies still more sophisticated. Extensive experience in information retrieval research over many years has clearly demonstrated that the optimal weighting comes from use of "tf/idf." This algorithm measures the frequency of occurrence of each term within a document. Then it compares that frequency against the frequency of occurrence in the entire database.

Not all terms are good "discriminators" — that is, all terms do not single out one document from another very well. A simple example would be the word "the." This word appears in too many documents to help distinguish one from another. A less obvious example would be the word "antibiotic." In a sports database when we compare each document to the database as a whole, the term "antibiotic" would probably be a good discriminator among documents, and therefore would be assigned a high weight. Conversely, in a database devoted to health or medicine, "antibiotic" would probably be a poor discriminator, since it occurs very often. The TF/IDF weighting scheme assigns higher weights to those terms that really distinguish one document from the others.
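The tf/idf weighting just described can be shown in a few lines. The three-document mini-collection and the log-based idf formula are illustrative assumptions; production engines use heavily tuned variants.

```python
import math
from collections import Counter

# Hypothetical mini-collection, invented for illustration
docs = {
    "d1": ["antibiotic", "dose", "patient", "antibiotic"],
    "d2": ["patient", "recovery", "sports"],
    "d3": ["sports", "score", "antibiotic"],
}

def tf_idf(term, doc_id):
    """Length-normalized term frequency times inverse document frequency."""
    words = docs[doc_id]
    tf = Counter(words)[term] / len(words)            # frequency within this document
    df = sum(1 for d in docs.values() if term in d)   # documents containing the term
    idf = math.log(len(docs) / df)                    # rarer terms weigh more
    return tf * idf

# "antibiotic" occurs in 2 of 3 docs, "recovery" in only 1, so even with a
# lower raw count "recovery" is the better discriminator here.
print(round(tf_idf("antibiotic", "d1"), 3))  # -> 0.203
print(round(tf_idf("recovery", "d2"), 3))    # -> 0.366
```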

Query Processor

Query processing has seven possible steps, though a system can cut these steps short and proceed to match the query to the inverted file at any of a number of places during the processing. Document processing shares many steps with query processing. More steps and more documents make the process more expensive for processing in terms of computational resources and responsiveness. However, the longer the wait for results, the higher the quality of results. Thus, search system designers must choose what is most important to their users — time or quality. Publicly available search engines usually choose time over very high quality, having too many documents to search against.

The steps in query processing are as follows (with the option to stop processing and start matching indicated as "Matcher"):

Tokenize query terms.

Recognize query terms vs. special operators.

————————> Matcher

At this point, a search engine may take the list of query terms and search them against the inverted file. In fact, this is the point at which the majority of publicly available search engines perform the search.

Delete stop words.

Stem words.

Create query representation.

————————> Matcher

Expand query terms.

Compute weights.

————————> Matcher

 

Step 1: Tokenizing. As soon as a user inputs a query, the search engine — whether a keyword-based system or a full natural language processing (NLP) system — must tokenize the query stream, i.e., break it down into understandable segments. Usually a token is defined as an alpha-numeric string that occurs between white space and/or punctuation.

Step 2: Parsing. Since users may employ special operators in their query, including Boolean, adjacency, or proximity operators, the system needs to parse the query first into query terms and operators. These operators may occur in the form of reserved punctuation (e.g., quotation marks) or reserved terms in specialized format (e.g., AND, OR). In the case of an NLP system, the query processor will recognize the operators implicitly in the language used no matter how the operators might be expressed (e.g., prepositions, conjunctions, ordering).
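Steps 1 and 2 might look like the following sketch; the token regex and the reserved-term set are assumptions for illustration, not any particular engine's grammar.

```python
import re

OPERATORS = {"AND", "OR", "NOT"}  # reserved terms in specialized format

def tokenize_and_parse(query):
    """Step 1: break the query stream into alpha-numeric tokens.
    Step 2: separate reserved operators from ordinary query terms."""
    tokens = re.findall(r"[A-Za-z0-9]+", query)
    terms = [t for t in tokens if t not in OPERATORS]
    operators = [t for t in tokens if t in OPERATORS]
    return terms, operators

terms, operators = tokenize_and_parse("kosovo AND milosevic NOT cook")
print(terms)      # -> ['kosovo', 'milosevic', 'cook']
print(operators)  # -> ['AND', 'NOT']
```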

Steps 3 and 4: Stop list and stemming. Some search engines will go further and stop-list and stem the query, similar to the processes described above in the Document Processor section. The stop list might also contain words from commonly occurring querying phrases, such as, "I’d like information about." However, since most publicly available search engines encourage very short queries, as evidenced in the size of query window provided, the engines may drop these two steps.

Step 5: Creating the query. How each particular search engine creates a query representation depends on how the system does its matching. If a statistically based matcher is used, then the query must match the statistical representations of the documents in the system. Good statistical queries should contain many synonyms and other terms in order to create a full representation. If a Boolean matcher is utilized, then the system must create logical sets of the terms connected by AND, OR, or NOT.

An NLP system will recognize single terms, phrases, and Named Entities. If it uses any Boolean logic, it will also recognize the logical operators from Step 2 and create a representation containing logical

sets of the terms to be AND’d, OR’d, or NOT’d.

At this point, a search engine may take the query representation and perform the search against the inverted file. More advanced search engines may take two further steps.

Step 6: Query expansion. Since users of search engines usually include only a single statement of their information needs in a query, it becomes highly probable that the information they need may be expressed using synonyms, rather than the exact query terms, in the documents which the search engine searches against. Therefore, more sophisticated systems may expand the query into all possible synonymous terms and perhaps even broader and narrower terms.

This process approaches what search intermediaries did for end users in the earlier days of commercial search systems. Back then, intermediaries might have used the same controlled vocabulary or thesaurus used by the indexers who assigned subject descriptors to documents. Today, resources such as WordNet are generally available, or specialized expansion facilities may take the initial query and enlarge it by adding associated vocabulary.
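A sketch of Step 6, with a tiny hand-built synonym table standing in for a resource like WordNet (the table entries are invented for illustration):

```python
# Hypothetical thesaurus; a real system would consult WordNet or similar
SYNONYMS = {
    "car": ["automobile", "vehicle"],
    "doctor": ["physician"],
}

def expand_query(terms):
    """Enlarge the query with synonymous vocabulary where known."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(SYNONYMS.get(term, []))  # add associated vocabulary
    return expanded

print(expand_query(["used", "car", "prices"]))
# -> ['used', 'car', 'automobile', 'vehicle', 'prices']
```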

Step 7: Query term weighting (assuming more than one query term). The final step in query processing involves computing weights for the terms in the query. Sometimes the user controls this step by indicating either how much to weight each term or simply which term or concept in the query matters most and must appear in each retrieved document to ensure relevance.

Leaving the weighting up to the user is not common, because research has shown that users are not particularly good at determining the relative importance of terms in their queries. They can’t make this determination for several reasons. First, they don’t know what else exists in the database, and document terms are weighted by being compared to the database as a whole. Second, most users seek information about an unfamiliar subject, so they may not know the correct terminology.

Few search engines implement system-based query weighting, but some do an implicit weighting by treating the first term(s) in a query as having higher significance. The engines use this information to provide a list of documents/pages to the user.

After this final step, the expanded, weighted query is searched against the inverted file of documents.

 

Search and Matching Function

How systems carry out their search and matching functions differs according to which theoretical model of information retrieval underlies the system’s design philosophy. Since making the distinctions between these models goes far beyond the goals of this article, we will only make some broad generalizations in the following description of the search and matching function.

Searching the inverted file for documents meeting the query requirements, referred to simply as "matching," is typically a standard binary search, no matter whether the search ends after the first two, five, or all seven steps of query processing. While the computational processing required for simple, unweighted, non-Boolean query matching is far simpler than when the model is an NLP-based query within a weighted, Boolean model, it also follows that the simpler the document representation, the query representation, and the matching algorithm, the less relevant the results, except for very simple queries, such as one-word, non-ambiguous queries seeking the most generally known information.

Having determined which subset of documents or pages matches the query requirements to some degree, a similarity score is computed between the query and each document/page based on the scoring algorithm used by the system. Scoring algorithms' rankings are based on the presence/absence of query term(s), term frequency, tf/idf, Boolean logic fulfillment, or query term weights. Some search engines use scoring algorithms not based on document contents, but rather on relations among documents or past retrieval history of documents/pages.
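A toy version of the match-and-score step, assuming a hypothetical inverted file with precomputed integer term weights (both the weights and the documents are invented):

```python
# Hypothetical inverted file: term -> {document: precomputed weight}
inverted_file = {
    "kosovo":    {"doc1": 8, "doc3": 4},
    "milosevic": {"doc1": 6, "doc2": 5},
    "cook":      {"doc2": 7},
}

def match(query_terms):
    """Score each document by summing the weights of the query terms
    it contains, then return an ordered results list, best match first."""
    scores = {}
    for term in query_terms:
        for doc, weight in inverted_file.get(term, {}).items():
            scores[doc] = scores.get(doc, 0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(match(["kosovo", "milosevic"]))
# -> [('doc1', 14), ('doc2', 5), ('doc3', 4)]
```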

After computing the similarity of each document in the subset of documents, the system presents an ordered list to the user. The sophistication of the ordering of the documents again depends on the model the system uses, as well as the richness of the document and query weighting mechanisms. For example, search engines that only require the presence of any alpha-numeric string from the query occurring anywhere, in any order, in a document would produce a very different ranking than one by a search engine that performed linguistically correct phrasing for both document and query representation and that utilized the proven tf/idf weighting scheme.

However the search engine determines rank, the ranked results list goes to the user, who can then simply click and follow the system’s internal pointers to the selected document/page.

More sophisticated systems will go even further at this stage and allow the user to provide some relevance feedback or to modify their query based on the results they have seen. If either of these are available, the system will then adjust its query representation to reflect this value-added feedback and re-run the search with the improved query to produce either a new set of documents or a simple re-ranking of documents from the initial search.

What Document Features Make a Good Match to a Query

We have discussed how search engines work, but what features of a query make for good matches? Let’s look at the key features and consider some pros and cons of their utility in helping to retrieve a good representation of documents/pages.

Term frequency: How frequently a query term appears in a document is one of the most obvious ways of determining a document’s relevance to a query. While most often true, several situations can undermine this premise. First, many words have multiple meanings — they are polysemous. Think of words like "pool" or "fire." Many of the non-relevant documents presented to users result from matching the right word, but with the wrong meaning.

Also, in a collection of documents in a particular domain, such as education, common query terms such as "education" or "teaching" are so common and occur so frequently that an engine’s ability to distinguish the relevant from the non-relevant in a collection declines sharply. Search engines that don’t use a tf/idf weighting algorithm do not appropriately down-weight the overly frequent terms, nor are higher weights assigned to appropriate distinguishing (and less frequently-occurring) terms, e.g., "early-childhood."

Location of terms: Many search engines give preference to words found in the title or lead paragraph or in the meta data of a document. Some studies show that the location — in which a term occurs in a document or on a page — indicates its significance to the document. Terms occurring in the title of a document or page that match a query term are therefore frequently weighted more heavily than terms occurring in the body of the document. Similarly, query terms occurring in section headings or the first paragraph of a document may be more likely to be relevant.

Link analysis: Pages that are referred to by many other pages, or that have a high number of "in-links," are generally judged more important.

Popularity: Google and several other search engines add popularity to link analysis to help determine the relevance or value of pages. Popularity utilizes data on the frequency with which a page is chosen by all users as a means of predicting relevance. While popularity is a good indicator at times, it assumes that the underlying information need remains the same.

Date of Publication: Some search engines assume that the more recent the information is, the more likely that it will be useful or relevant to the user. The engines therefore present results ordered from the most recent to the least current.

Length: While length per se does not necessarily predict relevance, it is a factor when used to compute the relative merit of similar pages. So, in a choice between two documents both containing the same query terms, the document that contains a proportionately higher occurrence of the term relative to the length of the document is assumed more likely to be relevant.

Proximity of query terms: When the terms in a query occur near to each other within a document, it is more likely that the document is relevant to the query than if the terms occur at greater distance. While some search engines do not recognize phrases per se in queries, some search engines clearly rank documents in results higher if the query terms occur adjacent to one another or in closer proximity, as compared to documents in which the terms occur at a distance.

Proper nouns: These sometimes have higher weights, since so many searches are performed on people, places, or things. While this may be useful, if the search engine assumes that you are searching for a name instead of the same word as a normal everyday term, then the search results may be peculiarly skewed. Imagine getting information on "Madonna," the rock star, when you were looking for pictures of Madonnas for an art history class.

Summary

Now that we have covered how a search engine works, we can discuss methods to take advantage of them. Let’s start with content. As you saw in the above pages, search engines are simple text parsers. They take a series of words and try to reduce them to their core meaning. They can’t understand text, nor do they have the capability of discerning between grammatically correct text and complete gibberish. This of course will change over time as search engines evolve and the cost of hardware falls, but we black hats will evolve as well, always aiming to stay at least one step ahead. Let’s discuss the basics of generating content as well as some software used to do so, but first, we need to understand duplicate content. A widely passed-around myth on webmaster forums is that duplicate content is viewed by search engines as a percentage. As long as you stay below the threshold, you pass by penalty-free. It’s a nice thought; it’s just too bad that it is completely wrong.

Duplicate Content

I’ve read seemingly hundreds of forum posts discussing duplicate content, none of which gave the full picture, leaving me with more questions than answers. I decided to spend some time doing research to find out exactly what goes on behind the scenes. Here is what I have discovered.

Most people are under the assumption that duplicate content is looked at on the page level, when in fact it is far more complex than that. Simply saying that “by changing 25 percent of the text on a page it is no longer duplicate content” is not a true or accurate statement. Let’s examine why that is.

To gain some understanding we need to take a look at the k-shingle algorithm that may or may not be in use by the major search engines (my money is that it is in use). I’ve seen the following used as an example, so let’s use it here as well.

Let’s suppose that you have a page that contains the following text:

The swift brown fox jumped over the lazy dog.

Before we get to this point the search engine has already stripped all tags and HTML from the page leaving just this plain text behind for us to take a look at.

The shingling algorithm essentially finds word groups within a body of text in order to determine the uniqueness of the text. The first thing they do is strip out all stop words like and, the, of, to. They also strip out all fill words, leaving us only with action words which are considered the core of the content. Once this is done the following “shingles” are created from the above text. (I’m going to include the stop words for simplicity)

The swift brown fox

swift brown fox jumped

brown fox jumped over

fox jumped over the

jumped over the lazy

over the lazy dog

These are essentially like unique fingerprints that identify this block of text. The search engine can now compare this "fingerprint" to other pages in an attempt to find duplicate content. As duplicates are found, a "duplicate content" score is assigned to the page. If too many "fingerprints" match other documents, the score becomes high enough that the search engines flag the page as duplicate content, thus sending it to supplemental hell or, worse, deleting it from their index completely.

Now let's run a second sentence through the same process:

My old lady swears that she saw the lazy dog jump over the swift brown fox.

The above gives us the following shingles:

my old lady swears

old lady swears that

lady swears that she

swears that she saw

that she saw the

she saw the lazy

saw the lazy dog

the lazy dog jump

lazy dog jump over

dog jump over the

jump over the swift

over the swift brown

the swift brown fox

Comparing these two sets of shingles we can see that only one matches ("the swift brown fox"). Thus it is unlikely that these two documents are duplicates of one another. No one but Google knows what the percentage match must be for two documents to be considered duplicates, but some thorough testing would sure narrow it down ;).
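The comparison above can be sketched in a few lines of Python. This is a toy version only: four-word shingles with stop words left in, just like the example, scored with Jaccard similarity. Real engines almost certainly hash their shingles and use different thresholds.

```python
import re

def shingles(text, k=4):
    """Return the set of k-word shingles for a body of text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b):
    """Jaccard similarity between the shingle sets of two documents."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

doc_a = "The swift brown fox jumped over the lazy dog."
doc_b = "My old lady swears that she saw the lazy dog jump over the swift brown fox."
print(sorted(shingles(doc_a)))
print(round(similarity(doc_a, doc_b), 3))
```

Run against the two example sentences, this produces the same 6 and 13 shingles listed above, with a single overlapping shingle, so the similarity score comes out very low.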

So what can we take away from the above examples? First and foremost, we quickly begin to realize that duplicate content detection is far more complex than saying "document A and document B are 50 percent similar". Second, we can see that people adding stop words and filler words to avoid duplicate content are largely wasting their time. It's the action words that should be the focus. Changing action words without altering the meaning of a body of text may very well be enough to get past these algorithms. Then again, there may be other mechanisms at work that we can't yet see, rendering that impossible as well. I suggest experimenting and finding what works for you in your situation.

The last paragraph here is the really important part when generating content. You can't simply add generic stop words here and there and expect to fool anyone. Remember, we're dealing with a computer algorithm here, not some supernatural power. Everything you do should be from the standpoint of a scientist. Think through every decision using logic and reasoning. There is no magic involved in SEO, just raw data and numbers. Always split test and perform controlled experiments.

What Makes A Good Content Generator?

Now that we understand how a search engine parses documents on the web, as well as the intricacies of duplicate content and what it takes to avoid it, it is time to check out some basic content generation techniques.

One of the more commonly used text spinners is known as Markov. Markov isn't actually intended for content generation; it's an application of something called a Markov chain, developed by the mathematician Andrey Markov. The algorithm takes each word in a body of content and reorders it based on which words follow which. This produces largely unique text, but it's also typically VERY unreadable. The quality of the output really depends on the quality of the input. The other issue with Markov is the fact that it will likely never pass a human review for readability. If you don't shuffle the Markov chains enough, you also run into duplicate content issues because of the nature of shingling as discussed earlier. Some people may be able to get around this by replacing words in the content with synonyms. I personally stopped using Markov back in 2006 or 2007 after developing my own proprietary content engine. Some popular software packages that use Markov chains include

RSSGM

and

YAGC

both of which are pretty old and outdated at this point. They are worth taking a look at just to understand the fundamentals, but there are FAR better packages out there.
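To see why Markov output reads so poorly, here's a toy word-level Markov chain generator in Python. The chain only knows which words follow which; it knows nothing about meaning. The sample text and chain order are purely illustrative, not taken from any of the packages above.

```python
import random

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words that follow it in the source."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, emitting words until `length` is reached or a dead end is hit."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    while len(out) < length:
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # no known continuation for this prefix
        out.append(rng.choice(followers))
    return " ".join(out)

source = ("the swift brown fox jumped over the lazy dog "
          "while the lazy dog dreamed the swift fox ran on")
print(generate(build_chain(source), length=12, seed=1))
```

Every emitted word comes from the source text, which is exactly why unshuffled Markov output trips the shingling comparison described earlier: long runs of the input survive intact.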

So, we've talked about the old methods of doing things, but this isn't 1999; you can't fool the search engines by simply repeating a keyword over and over in the body of your pages (I wish it were still that easy). So what works today? Now and in the future, LSI is becoming more and more important. LSI stands for Latent Semantic Indexing. It sounds complicated, but it really isn't. LSI is basically just a process by which a search engine can infer the meaning of a page based on the content of that page. For example, let's say it indexes a page and finds words like atomic bomb, Manhattan Project, Germany, and Theory of Relativity. The idea is that the search engine can process those words, find relational data, and determine that the page is about Albert Einstein. So, ranking for a keyword phrase is no longer as simple as having content that talks about and repeats the target keyword phrase over and over like in the good old days. Now we need to make sure we have other key phrases that the search engine thinks are related to the main key phrase.

So if Markov is easy to detect and LSI is starting to become more important, which software works, and which doesn’t?

Software

Fantomaster Shadowmaker: This is probably one of the oldest and most commonly known high end cloaking packages being sold. It’s also one of the most out of date. For $3,000.00 you basically get a clunky outdated interface for slowly building HTML pages. I know, I’m being harsh, but I was really let down by this software. The content engine doesn’t do anything to address LSI. It simply splices unrelated sentences together from random sources while tossing in your keyword randomly. Unless things change drastically I would avoid this one.

SEC (Search Engine Cloaker): Another well known paid script. This one is of good quality and, with work, does provide results. The content engine is mostly manual, making you build sentences which are then mixed together for your content. If you understand SEO and have the time to dedicate to creating the content, the pages built last a long time. I do have two complaints. The software is SLOW. It takes days just to set up a few decent pages. That in itself isn't very black hat. Remember, we're lazy! The other gripe is the IP cloaking. Their IP list is terribly out of date, containing only a couple thousand IPs as of this writing.

 
SSEC

or

Simplified Search Engine Content

This is one of the best IP delivery systems on the market. Their IP list is updated daily and contains close to 30,000 IPs. The member-only forums are the best in the industry. The subscription is worth it just for the information contained there. The content engine is also top notch. It's flexible, so you can choose to use their proprietary scraped content system, which automatically scrapes search engines for your content, or you can use custom content similar in fashion to SEC above, but faster. You can also mix and match the content sources, giving you the ultimate in control. This is the only software as of this writing that takes LSI into account directly from within the content engine. This is also the fastest page builder I have come across. You can easily put together several thousand sites, each with hundreds of pages of content, in just a few hours. Support is top notch, and the knowledgeable staff really knows what they are talking about. This one gets a gold star from me.

BlogSolution: Sold as an automated blog builder, BlogSolution falls short in almost every important area. The blogs created are not wordpress blogs, but rather a proprietary blog software specifically written for BlogSolution. This “feature” means your blogs stand out like a sore thumb in the eyes of the search engines. They don’t blend in at all leaving footprints all over the place. The licensing limits you to 100 blogs which basically means you can’t build enough to make any decent amount of money. The content engine is a joke as well using rss feeds and leaving you with a bunch of easy to detect duplicate content blogs that rank for nothing.

Blog Cloaker

Another solid offering from the guys that developed SSEC. This is the natural evolution of that software. This mass site builder is based around wordpress blogs. This software is the best in the industry hands down. The interface has the feel of a system developed by real professionals. You have the same content options seen in SSEC, but with several different redirection types including header redirection, JavaScript, meta refresh, and even iframe. This again is an ip cloaking solution with the same industry leading ip list as SSEC. The monthly subscription may seem daunting at first, but the price of admission is worth every penny if you are serious about making money in this industry. It literally does not get any better than this.

Cloaking

So what is cloaking? Cloaking is simply showing different content to different people based on different criteria. Cloaking automatically gets a bad reputation, but that is based mostly on ignorance of how it works. There are many legitimate reasons to cloak pages. In fact, even Google cloaks. Have you ever visited a web site with your cell phone and been automatically directed to the mobile version of the site? Guess what, that's cloaking. How about web pages that automatically show you information based on your location? Guess what, that's cloaking. So, based on that, we can break cloaking down into two main categories: user agent cloaking and IP-based cloaking.

User agent cloaking is simply a method of showing different pages or different content to visitors based on the user agent string they visit the site with. A user agent is simply an identifier that every web browser and search engine spider sends to a web server when it connects to a page. Above we used the example of a mobile phone. A Nokia cell phone, for example, will have a user agent similar to:

User-Agent: Mozilla/5.0 (SymbianOS/9.1; U; [en]; Series60/3.0 NokiaE60/4.06.0) AppleWebKit/413 (KHTML, like Gecko) Safari/413

Knowing this, we can tell the difference between a mobile phone visiting our page and a regular visitor viewing our page with Internet Explorer or Firefox for example. We can then write a script that will show different information to those users based on their user agent.
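Such a script can be sketched in a few lines of Python. The substring checks and template file names here are purely illustrative assumptions, not taken from any real package; production scripts keep much longer lists.

```python
def classify_user_agent(ua):
    """Crude user-agent sniffing: decide what kind of visitor sent this string."""
    ua = ua.lower()
    if "googlebot" in ua or "bingbot" in ua or "slurp" in ua:
        return "search-bot"
    if "symbianos" in ua or "nokia" in ua or "mobile" in ua:
        return "mobile"
    return "browser"

# Hypothetical mapping from visitor class to the page template served.
PAGES = {"search-bot": "keyword_page.html",
         "mobile": "mobile.html",
         "browser": "default.html"}

nokia = ("Mozilla/5.0 (SymbianOS/9.1; U; [en]; Series60/3.0 NokiaE60/4.06.0) "
         "AppleWebKit/413 (KHTML, like Gecko) Safari/413")
print(PAGES[classify_user_agent(nokia)])  # the Nokia UA gets the mobile template
```

A web server handler would call `classify_user_agent()` on the incoming request's User-Agent header and serve the matching template.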

Sounds good, doesn't it? Well, it works for basic things like mobile and non-mobile versions of pages, but it's also very easy to detect, fool, and circumvent. Firefox, for example, has a handy plug-in that allows you to change your user agent string to anything you want. Using that plug-in I can make the script think that I am a Google search engine bot, thus rendering your cloaking completely useless. So, what else can we do if user agents are so easy to spoof?

IP Cloaking

Every visitor to your web site must first establish a connection with an IP address. These IP addresses can be resolved via reverse DNS lookups, which in turn identify the origin of that visitor. Every major search engine crawler identifies itself with a unique signature viewable by reverse DNS lookup. This means we have a surefire method for identifying and cloaking based on IP address. This also means that we don't rely on the user agent at all, so there is no way to circumvent IP-based cloaking (although some caution must be taken, as we will discuss). The most difficult part of IP cloaking is compiling a list of known search engine IPs. Luckily, software like

Blog Cloaker

and

SSEC

already does this for us. Once we have that information, we can then show different pages to different users based on the ip they visit our page with. For example, I can show a search engine bot a keyword targeted page full of key phrases related to what I want to rank for. When a human visits that same page I can show an ad, or an affiliate product so I can make some money. See the power and potential here?

So how can we detect IP cloaking? Every major search engine maintains a cache of the pages it indexes. This cache is going to contain the page as the search engine bot saw it at indexing time. This means your competition can view your cloaked page by clicking on the cache in the SERPs. That's ok, it's easy to get around that. Using the noarchive meta tag in your pages forces the search engines to show no cached copy of your page in the search results, so you avoid snooping webmasters. The only other method of detection involves IP spoofing, but that is a very difficult and time consuming thing to pull off. Basically you configure a computer to act as if it is using one of Google's IPs when it visits a page. This would allow you to connect as though you were a search engine bot, but the problem here is that the data for the page would be sent to the IP you are spoofing, which isn't on your computer, so you are still out of luck.

The lesson here? If you are serious about this, use IP cloaking. It is very difficult to detect and by far the most solid option.

Link Building

As we discussed earlier, Black Hats are basically White Hats, only lazy! As we build pages, we also need links to get those pages to rank. Let's discuss some common and not so common methods for doing so.

Blog ping: This one is quite old, but still widely used. Blog indexing services set up a protocol in which a web site can send a ping whenever new pages are added to a blog. The service can then send over a bot that grabs the page content for indexing and searching, or simply adds it as a link in their blog directory. Black hats exploit this by writing scripts that send out massive numbers of pings to various services in order to entice bots to crawl their pages. This method certainly drives the bots, but in the last couple of years it has lost most of its power as far as getting pages to rank.
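The ping itself is a tiny XML-RPC call to the standard weblogUpdates.ping method. A minimal Python sketch; the service URL, blog name, and blog URL are placeholders:

```python
import xmlrpc.client

def ping_blog_service(service_url, blog_name, blog_url):
    """Send a standard weblogUpdates.ping to a blog ping service.

    Services reply with a struct whose 'flerror' member is False on success.
    """
    server = xmlrpc.client.ServerProxy(service_url)
    return server.weblogUpdates.ping(blog_name, blog_url)

# The XML-RPC request body the call above would put on the wire:
payload = xmlrpc.client.dumps(("My Blog", "http://example.com/"),
                              methodname="weblogUpdates.ping")
print(payload)
```

Calling `ping_blog_service("http://rpc.example.com/", "My Blog", "http://example.com/")` would perform the actual network request; the `dumps` call just shows the XML involved.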

Trackback: Another method of communication used by blogs, trackbacks are basically a method in which one blog can tell another blog that it has posted something related to or in response to an existing blog post. As a black hat, we see that as an opportunity to inject links to thousands of our own pages by automating the process and sending out trackbacks to as many blogs as we can. Most blogs these days have software in place that greatly limits or even eliminates trackback spam, but it’s still a viable tool.
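Under the hood, a trackback is nothing more than a form-encoded HTTP POST to the target post's trackback URL; the receiving blog answers with a small XML document whose error element is 0 on success. A minimal Python sketch that only builds the request (all URLs and field values here are placeholders):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_trackback_ping(trackback_url, post_url, title, excerpt, blog_name):
    """Build the HTTP POST defined by the TrackBack protocol."""
    body = urlencode({"url": post_url, "title": title,
                      "excerpt": excerpt, "blog_name": blog_name})
    return Request(trackback_url, data=body.encode("utf-8"),
                   headers={"Content-Type":
                            "application/x-www-form-urlencoded; charset=utf-8"})

req = build_trackback_ping("http://example.com/trackback/42",
                           "http://myblog.example/post-1",
                           "A related post", "Short excerpt...", "My Blog")
print(req.full_url)
```

Passing the returned request to `urllib.request.urlopen` would send the ping; only the `url` field is strictly required by the protocol.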

EDU links: A couple years ago Black Hats noticed an odd trend. Universities and government agencies with very high ranking web sites often times have very old message boards they have long forgotten about, but that still have public access. We took advantage of that by posting millions of links to our pages on these abandoned sites. This gave a HUGE boost to rankings and made some very lucky Viagra spammers millions of dollars. The effectiveness of this approach has diminished over time.

Forums and Guest books: The internet contains millions of forums and guest books, all ripe for the picking. While most forums are heavily moderated (at least the active ones), that still leaves you with thousands in which you can drop links where no one will likely notice or even care. We're talking about abandoned forums, old guest books, etc. Now, you can get links dropped on active forums as well, but it takes some more creativity: putting up a post related to the topic on the forum and dropping your link in the BB code for a smiley, for example. Software packages like Xrumer made this a VERY popular way to gather back links. So much so that most forums have methods in place to detect and reject these types of links. Some people still use them and are still successful.

Link Networks: Also known as link farms, these have been popular for years. Most are very simplistic in nature: page A links to page B, page B links to page C, then back to A. These are pretty easy to detect because of the limited range of IPs involved. It doesn't take much processing to figure out that there are only a few people involved with all of the links. So, the key here is to have a very diverse pool of links.

Money Making Strategies

We now have a solid understanding of cloaking, how a search engine works, content generation, software to avoid, software that is pure gold and even link building strategies. So how do you pull all of it together to make some money?

Landing Pages: Software such as Landing Page Builder automatically builds targeted pages from the traffic you send it. You load up your money keyword list, set up a template with your ads or offers, then send all of your doorway/cloaked traffic to the index page. The Landing Page Builder shows the best possible page with ads based on what the incoming user searched for. It couldn't be easier, and it automates the difficult tasks we all hate.

Affiliate Marketing: We all know what an affiliate program is. There are literally tens of thousands of affiliate programs with millions of products to sell. The most difficult part of affiliate marketing is getting well qualified, targeted traffic. That again is where good software and cloaking come into play. Some networks and affiliates allow direct linking. Direct linking is where you set up your cloaked pages with all of your product keywords, then redirect straight to the merchant or affiliate's sales page. This often results in the highest conversion rates, but as I said, some affiliates don't allow direct linking. So, again, that's where landing pages come in: either building your own (which we are far too lazy to do), or using something like Landing Page Builder, which automates everything for us. Landing pages give us a place to send and clean our traffic; they also prequalify the buyer and make sure the quality of the traffic sent to the affiliate is as high as possible. After all, we want to make money, but we also want to keep a strong relationship with the affiliate so we can get paid.

Conclusion

As we can see, Black Hat Marketing isn’t all that different from White Hat marketing. We automate the difficult and time consuming tasks so we can focus on the important tasks at hand. I would like to thank you for taking the time to read this.


 

 

 

I HAVE GONE CRAZY!!!



REVEALING ALL MY CRAZY BLACKHAT AND WHITEHAT METHODS

THAT MADE ME $150/DAY OVER THE LAST 4 DAYS!


YOU CAN DO THIS EASILY, I GUARANTEE IT!


All my CPA Methods that have worked for me.


This guide will teach you everything there is to know about CPA, plus some blackhat and whitehat methods proven to make you money!


All these methods are so easy, anyone can implement them!


Most of these don’t require a penny from you!

 

Well, only if you wanna make some money – GET THEM HERE……..

 

CLICK HERE TO DOWNLOAD

 

 

 

"You’re About To Discover A Brand New Profitable CPA Cash System That Will Literally Blow Your Mind."


The method is called the "CPA Prophet System"

This is a system I developed for promoting CPA offers with virtually no competition.

Yes, you read right, NO COMPETITION!

With this system I do not foresee any saturation in any niche.

Once this system is in place you can tell within 5 days if you’re running a successful CPA campaign.

I have made $2,520 in one week using this system.

Really! This system is so simple even my 17yr old high school daughter is making money from it. While her friends are ready to give up their lives to work for the summer, my daughter is enjoying the money she is making and her friends are so jealous. After father’s day my daughter and I are going to look at cars. She is planning to buy her first car with the money she’s made.

If this system can make money for a high school kid, it can definitely make money for you!

What this system is Not:

  • It’s Not Pay Per Click,
  • It’s Not Pay Per View,
  • It’s not placing Ads on Craigslist,
  • It’s Not eWhoring,
  • It has nothing to do with email,  or social media.
  • No placing ads offline

It's just a proven system that works. There is an investment to get this off the ground, but once you get it going, the sky's the limit.

As a matter of fact dare I say that this system is brand new and no one has ever done this before?

Get it Now…….

Click Here to Download

 

 

 

 

This 15-page report outlines a great black hat method to rake in tons of e-mail submits from unsuspecting Internet users.

Easily Make $150/Day on Autopilot PER CPA NETWORK

The Only Limit on Earnings is How Many CPA Accounts You Want to Spread Money Over to Stay Under the Radar!

Includes full method for building this black hat sneaky submit website as well as blanking the referrer to keep you safe from your affiliate manager, how to set up a delayed autoresponder in gmail and even how to get all the leads you can handle – all on autopilot!

Includes step-by-step instructions that anyone, no matter your knowledge, can follow to earn big!

Plus a Free Bonus: How to Get Accepted into a CPA Network is included so that you can make as many CPA accounts as you want and always get accepted!

 

 

Sneaky Submit Black Hat Method:

Step 1: Building the Sneaky Submit Website

Step 2: Getting the Traffic and Making the $$

Delayed Gmail Autoresponder Setup

Getting Email Leads via Craigslist

How to Send Delayed Autoresponses

Spreading Legitimate Traffic

 

Get another Great Share from BLACKHATBUZZ

Download it Here…….

Click here to Download

 

 

Important Disclaimer: Some, but not all, of the strategies, techniques, and methods I share in The Dirty Secrets of Hyper-Speed Info Products course may be considered "Grey Hat" and may result in:

  • Server-Crippling Traffic To Your Websites
  • Shocking Sales Conversions For Your Products
  • Stampedes of Hungry Buyers Opting-In To Your Lists
  • So Much Automated Income Each Month You May Forget How To Spell "Recession"
It’s Time To Stop Making Them Rich and MAKE YOURSELF RICH

The Dirty Secrets of Hyper-Speed Info Products…

Dirty Secret #1: The Dirty Secrets of Hyper-Speed Info Products can have you generating sales for your product making you money today – not some "guru".

Dirty Secret #2: No one else is using these methods… How do I know? Because if they were, the forums would be flooded with talk about them and the “gurus” would be charging $2,000+ for the info.

Dirty Secret #3: I don’t have to worry about driving traffic to my sites – my affiliates do that for me.

Dirty Secret #4: I spend less than 5 hours making COMPLETE products and that INCLUDES the time it takes to create the basic website!

Dirty Secret #5: Pay-Per-Click SUCKS! – I let my affiliates battle Adwords.

Dirty Secret #6: ENTIRE membership e-courses that I spend less than 3 or 4 hours setting up bring in between $10 and $127 per month APIECE

Dirty Secret #7: Blackhat? Why bother? My customers email me asking when my next product’s coming out before I’ve even thought of what it’s going to be about!

Dirty Secret #8: Short reports that take me an hour (or less!) to create using these secret methods sell for $10 to $47 each. These really add up fast…I sell hundreds a month and you will too.

Dirty Secret #9: I don’t wait to get paid. I drink my morning coffee adding up my Paypal balance.

Dirty Secret #10: Even though these products take me no time at all to create, customers love them so much they email me asking for more before I’ve even had time to think up new material! How much easier can “selling” info products be?

Dirty Secret #11: Recession? What Recession? The economic downturn is boosting sales!

Dirty Secret #12: Saturation Proof. I can enter ANY niche, no matter how competitive, and dominate immediately.

Dirty Secret #13: 100% Unique content that NO ONE ELSE has – NO PLR, NO MRR, NO Rehashed, Recycled crap (admittedly, sometimes I’ll throw in MRR bonuses if they’re good enough, but my actual products are 100% unique)

Get them HERE…………….

To download it CLICK HERE

 

I’ve decided to unveil the method that can be used to make over $330 in a single day, from a single Facebook Group, with no investment whatsoever.

 

But that’s only part of this awesome marketing guide, and only a fraction of the package you will receive. Not only will this guide show you the exact methods we’ve used to generate hundreds of dollars a day from Facebook Groups, you’ll also learn many other unbelievably lucrative methods that you won’t see ANYWHERE else.

Here are some of the things you will learn how to do with this guide:

1. The secret method  used to generate over $330 from one Facebook Group in one day, with zero investment.

2. How to promote Groups effectively and efficiently.

3. How to send 25,000 to 100,000+ emails EVERY DAY for FREE from Facebook’s servers.

4. How to instantly get HUNDREDS to THOUSANDS of Facebook Friends.

5. How to utilize Facebook’s viral channels to drive clicks or traffic anywhere you want.

6. How to create Groups that people feel they NEED to join.

7. Learn how to setup a simple Facebook application.

8. Learn an overall method of promotion that can be applied anywhere on the web to make you huge amounts of money.

Not only will this awesome package teach you all of this, you will also learn the complete ins and outs of Facebook. Learn everything there is to know about Facebook's viral channels and how to use and/or abuse them. This package is meant for EVERYONE, from beginner to advanced FB marketers. We guarantee there is something to learn for everyone here. We cover the basics of Facebook marketing, so a bit may be redundant for those who are experienced FB marketers, but I'm sure you will still learn a thing or two. After that comes the juicy stuff, the secret stuff. This will be new to everyone. Like I said, if you're an advanced marketer you can jump right in; if you're just getting started, this package WILL introduce you to Facebook and its unique features and guide you through the entire process, from beginning to end. Plus a LOT more.

This is a 29 Page, Full-Color, Illustrated PDF Guide

Here’s what is included in the package:

1. FREE Facebook Friends Video Lessons, Part I and II

2. Secret Method Overview Tutorial Video

3. Simple Facebook Application Script (this will earn you lots of money!)

4. 8 Page, Full-Color, Illustrated PDF Easy Installation Guide For Simple Facebook Application Script.

 

Download the Complete package Below:

 

Click the Pic to Download…


Legal Cookie Stuffing

Posted by admin under General

 

This is a little trick that can be used to make thousands of dollars a month driving traffic to simple review pages for affiliate products. It is almost too simple to take seriously, but try it and see for yourself.

Cookie stuffing doesn’t always have to be against the terms of service. I constantly get permission from merchants to cookie stuff my link on my own pages.

The bottom line is that you are spending time and money to get potential customers to your landing page/affiliate link, so therefore, if they buy, you deserve the commission.

The problem with affiliate links is that almost everyone knows what they are these days, and I don’t care what you do to pretty them up.

Regardless if it’s:

www.domain.com/ref?aff=1287 (says I’m an affiliate who has no idea what I’m doing)

or

www.tinyurl.com/brb18s (says i’m an affiliate that makes no money because you’ll never click this)

or

www.mydomain.com/recommends/youngteenagechicks (says I may have power link generator but people still know what I’m trying to pull)

or even………….

www.domain.com secretly embedded with your affiliate link (says I’m at least sneaky about it but you were also smart enough to hover the mouse over)

A good percentage of the time you are losing the chance at those visitors because you have now lost credibility instantly.

The rest of the time is spent losing the commission because the buyer just hacks your affiliate link off the domain, or does a type in.

People today are so against you making money that even if you do get them to click an embedded cloaked link like xyxyxyxy, once they see what the site is they will actually delete their cookies and visit again so your cookie is in the trash can.

If instead of the above, you did a review site or landing page that told the user to go to www.domain.com with no affiliate link, it looks like a real deal review.

I only use this for high traffic sites like a webcams.com, buy.com or new hot IM products that are gaining a buzz.

The whole point is that the person knows instantly that it's not an affiliate link, yet it doesn't matter, because as soon as they landed on your page they were stuffed with your cookie.

This also works fantastically because a lot of times a person will search multiple sites looking for information and yours is seldom the one that causes the sale right then and there.

Luckily, since you stuffed that cookie, and that visitor probably won’t be clicking any affiliate links, they will simply do a type in like www.commissionblueprint.com when they decide to buy a couple of hours or days later, and guess what?

You still got the commission.

What’s most comical about this is that I really didn’t want to give this little gem up, but you’re probably thinking it’s a waste of time or at least something you’ll save for a lot later.

It’s a shame because if I didn’t make the money I do from other streams, this one tactic would be my full time job and I could easily earn $300,000 or more a year setting up stuffed page after stuffed page.


One of the best ways to do this is to look like a total noob, and by that I mean set up a blog called something stupid like www.darwinscoolblog.com.

Instead of your typical review site page put up just for a specific product, make it a blog post as if you bought the product, used the service, etc., and make it seem like you are just having a general conversation about it.

The point of this step is actually psychological. If your URL is www.whatevertheproductisreview.com and you are advertising it, your visitors will know that something is up even without an affiliate link, and if nothing else they will assume you probably own the product and are promoting it yourself.

If it’s just some random blog, many people will assume you’re just promoting it for readership and not whatever post they happened to land on. This, in their minds, makes you an unbiased third party on the subject.

Everyone thinks cookie stuffing is mainly for ebay and that it has to be illegal in the eyes of the merchant, and that’s just not the case, nor is it the smart way to utilize such an amazing income generating tool.

Cookie stuffing

Before even considering cookie stuffing, please read this post on dropping affiliate cookies. It's not my place to judge people and their methods, but I want to at least point out the moral and legal implications before you go running amok and stuffing cookies everywhere. This page isn't here because this is a new, amazing method of making money; it's old and pretty much talked about everywhere. This page is here as a result of a debate elsewhere. If you already have an idea of the different cookie stuffing methods, what's involved, etc., then read this updated post on cookie stuffing.

What is cookie stuffing?

As a normal affiliate you would sign up to an affiliate program, such as ebay's, and then promote the link they give you on your own website. When someone clicks on the link and goes through to ebay, a cookie is put onto their system to track them, and if they purchase something you earn a little bit of money.

However, when you're cookie stuffing you don't actually send visitors to ebay; you simply force the cookie onto their system in the background without them ever knowing. This means you don't have to drive traffic to them or give them any kind of promotion at all. And because ebay is so big, the chances are a lot of your visitors are going to buy something from them at some point anyway.

How can I start stuffing cookies?

There are several methods of stuffing cookies. There are some paid solutions out there, but I can't see that they offer much, if any, benefit over doing it yourself.

The solution you use will depend on how much control you have over the site. For example, you will use a different method on sites you own yourself than on other people's forums you sign up to and post on.

All of the following examples are based on the victim merchant being ebay. This is just a random choice; any affiliate program could be used. I'm going to use a made-up URL of http://www.ebay.com/?affid=233499

I have created this Resource File which includes the code for each of these examples.

Download it Here


The most basic method ever..

The most basic way of stuffing a cookie would be to use an html img tag which references the affiliate page that drops the cookie. The visitor's web browser will go to this page, even though it's not an image, and will accept the cookies returned.

Iframe cookie stuffing

Description: This is one of the oldest and simplest methods out there. Most people who cookie stuff started out using this method. Basically, you put a 1 pixel iframe on your existing website, and every time someone visits your site, the affiliate page is loaded within the iframe and the cookies are dropped onto the visitor's system.

Resource folder /iframes/1/

Description: You literally just take your affiliate link and make a 1 pixel iframe with the source being the affiliate link.

Pros: The biggest pro is that it's extremely easy, and just about anyone can do it without even having to think about it. To improve your chances of not getting discovered running the hidden iframe, you should ensure that there is actually an [ebay] banner or textual link on the same page as the iframe, so that at first look the advertiser will think you are sending them genuine traffic.

Cons: This is quite an easy method to pick up on. The merchant or affiliate company simply needs to view the html source code of your site and see the hidden iframe.

Resource folder /iframes/2/

Description: You again go with the same idea of a 1 pixel iframe but instead of having the iframe in your normal page you include an external javascript file which obfuscates the iframe html code. You can find thousands of free online services which will obfuscate your code by searching for ‘html encryption’. For example you could create stats.js which holds the obfuscated iframe and then include it within your normal page.

Pros: Even if the merchant checks your html code, they’re just going to see normal html and are unlikely to think anything of the javascript file. Even if they do then they won’t understand the contents of the javascript file because it’s been obfuscated.

Cons: Some advanced merchants might go to the extreme of checking all your javascript files and then de-obfuscating your code.

Resource folder /iframes/3/

Description: You may be thinking that the affiliate is going to check your external javascript files and then de-obfuscate the html. Okay, well how about another layer of protection! We use htaccess to tell the server to treat our JS file as a php file and then check the referer. If there is no referring page, then we know someone has gone direct to the javascript file and we output some bullshit JS; otherwise we output the real stuff.

Pros: Same as the previous method, plus anyone who requests the javascript file directly (with no referer) only ever sees the decoy code, so even inspecting the file itself reveals nothing.

Cons: Again, if you get a merchant on interweb steroids, they may send a fake referer to the javascript file to see if you're cloaking the content based on referer. Very unlikely, but possible. Another problem is that if they sniffed the raw packets when viewing your main site, they'd see the code come in. This is even more unlikely, and they'd still have to then de-obfuscate your code.

Overall pros of the iframe methods: These methods are very simple to use and extremely quick to set up. They're the starting step for most cookie stuffers and give you a good introduction to how it works. You would build upon these scripts with different ways to protect yourself from being caught.

Overall cons of the iframe methods: The biggest con is that the visitor might spot the affiliate URL at the bottom of their browser window as the page loads in the background.

Image cookie stuffing

Description: This method is a little more advanced and secure than the iframe methods. This time you include a standard image on your page but set the source of the image as the affiliate link. The browser will follow this, and although it won't be able to load it as an image (since it's actually a webpage), it will still read and act on the headers returned, and as we know, cookies are sent via headers. We set the alt of the image to a space so that when it doesn't load, it simply produces a blank space rather than a broken-image icon.

Resource folder /image/1/

Description: You literally just take your affiliate link and add a new image to your page with the source being the affiliate link. You set the alt text to a space so that no broken-image icon is displayed.

Pros: This is better than the iframe methods because instead of many URLs passing through the visitor's browser as each component within the iframe loads, there will only be one URL, and it will pass very quickly.

Cons: Just like iframe method 1, the affiliate/merchant could view the source and see that something suspicious is going on pretty easily.

Resource folder /image/2/

Description: This time, to decrease the chances of getting caught, we include what appears to be a local jpg file but is in fact a php file which uses a redirection header to send the browser on to the affiliate page. Just like iframe method 3, we check the referer so that if someone goes direct to the page they won't see the redirect.

Pros: Even if the affiliate/merchant checks your source code then they’re almost certainly not going to think anything of just another image tag within your code.

Cons: The visitor/merchant might spot their domain at the bottom of the browser as it quickly flashes past.

One huge pro of the image method is that you can sign up to OTHER PEOPLE'S forums and then post the image link in your signature. For example, you sign up to a poker room that pays $100 for every customer you get to join them. Then you sign up to a huge poker forum, stick the image in your signature, and start posting on the forum. Before you know it, you have dropped your cookies on everyone on that forum, and the chances are quite high they're going to sign up for a poker room anyway. You can't do this with the iframe method, since most forums won't allow you to post html.

The Script Adult Sites Use To Generate Thousands Of Unique Visitors And Make Almost Any Site Viral Instantly!

The “Revenge” Site Script

I’m giving away a script that can be used to make sites like the following:

WARNING: Mature content below.

Click here for sample viral Site

 

 


CLICK HERE TO DOWNLOAD

 

How does 355,000 visitors to your site in less than a month sound, with $0 spent on ads and less than 2 hours of work?

How about $2500+ profit within 3 days of launching a website, with no affiliates and no product?

How would you like to turn your website visitors into link dropping machines who promote your website every chance they get without paying them a dime?

Sound too good to be true? It’s true.

The script allows you to reward your visitors for sending traffic to your site. Here are just a few ideas…

“Send 10 visitors to my sales page and get a free report”.

CLICK HERE For another sample site using this Technique

“Send 25 people to my squeeze page and get access to my secret video”.

Included along with the script, you will receive an information package with tons of great ideas for starting up your site using The Viral Script. Here are a few samples from the package:

“If you own an ebook product, you can split the book up into small chunks of text, allowing a visitor to see more of the ebook by sending visitors.”

“If you're into adult, then you have tons of options with The Viral Script. Put up a site with a bunch of high-quality videos in whatever niche you choose and let visitors unlock more videos the more views they send.”

“Watching TV shows online is becoming very popular, so make a site with a bunch of episodes for a particular TV season. The more visits your visitors send, the more of that season they get to watch. Find some good-converting Email Submits and this could be very popular.”

The script can be used for product promotion, list building, affiliate marketing, traffic contests, you name it; the power of “The Script” is only limited by your imagination.

 
