Get PAID every time someone downloads your Files - MAKE OVER $100 PER DAY
 

 

 

Over 1250 ClickBank Products for FREE !!

 

I am assuming that most of you know about ClickBank products. Good, bad or indifferent is really not the issue. Over the past couple of years there have been many methods to search out and find download pages for ClickBank products. We even wrote a post about a year ago covering some of these methods for getting ClickBank products for free.

A few days ago I got to thinking (actually, someone suggested this to me) that there must be an easier way than manually using the search engines to find these product download pages. With that in mind, I came up with a small script that would automate the search for me and put all the download pages into a text file for easy reference.

My little script gave me over 1,250 premium products from ClickBank. The product owners should really protect these pages better. Keep in mind that these are the product download pages, not the products themselves. The list includes every imaginable type of product that is offered. Some of the products sell for upwards of $100, so all told this list contains over $50,000.00 in ClickBank products.

Now, I tested some of the download pages that the script gave me, and in my testing about 90 percent of the pages were active. Keep in mind that not all of the links may work, as the owners may have abandoned the sites or changed the download page. But it looks like over 90 percent of the download page links are still active.

So, I decided to offer this list to everyone absolutely FREE. Whether you are looking for products about making money, weight loss, relationships, travel, or anything else you can think of, it is probably included in this list of download pages.

 

Get the ClickBank Product download page List here now FREE………

Click the Picture to download…….

Over 1250 ClickBank Products for FREE !!

 

 

BETTER THAN ANY ARTICLE REWRITER

Are you using PLR articles or auto blogging to make money online with Adsense or CPA networks?

Are you making no Money because zillions of other marketers are using the same content as you on their sites?

Are you looking for the search engines to treat your articles as unique even after you’ve used them on article directories?

Need a way to spin articles on your own site?

Now you can use unique content articles and PLR content as posts on blogs whose sole purpose is to make Adsense earnings or CPA commissions.

 

 

What if we could stop the search engines from seeing duplicate content in our posts…even if we used the same posts on different domains?

 

 

What if your articles were unique on every single view?

 

 

What if the same content looked the same to the human eye but NOT to the search engines?

 

 

What if we could rank for the same content on many domains?

 

 

What if we could rule the SERPS with the same article without resorting to free sites like Squidoo and HubPages, where we run the risk of being banned and are fighting for only 2 entries in Google?

 

 

What if our content was viewed as unique even if another blogger was using the exact same article as you?

 

 

What if the content on your Autoblogs was unique even compared to the original content?

 

And what if this was achievable without one bit of content spinning needed? No more unreadable articles or blog posts.

 

Well there is a Solution !!

This is a unique WordPress plugin that achieves all of this with a 5-second install on your blog.

Yep, totally unique content that’s readable, in seconds. No rewriting, no copying and pasting, no spinning, no messing with code…just unique blog posts time after time.

 

What are the Benefits of this Plugin……

  • Duplicate Content is NON-EXISTENT
  • Rehash Articles in seconds so that the same content can be used over and over again.
  • Build multiple Blogs with the same Content in record time.
  • Get more traffic to your auto blogs from the SERPS.
  • ZERO Technical know-how is needed.
  • You can even be 100% unique even if 1,000s of bloggers are using the same content.
  • MAKE MORE MONEY from the increase in Traffic from Search Engines.

It's Simple……

NO EFFORT + NO DUPLICATE CONTENT = $$$$$

 

So what is this going to Cost ?

NOTHING – NADA – ZERO

Get it Below FREE………

Click HERE to Download

OR

 

JUST SEND 20 FRIENDS OR FAMILY TO OUR SITE

Below is your UNIQUE BONUS Link

Just send 20 Visitors to our Site Using your UNIQUE BONUS Link and the Download Link will be exposed at 100 %

 

OR, If you Need the "Eliminate Duplicate Content Plugin" right Now and you Don't have 20 Friends Yet ……

Click HERE

CPA Content Wizard - Move over Blackhat Codebreaker

Looking for an EASY way to make

MONEY with CPA networks

 

Now you can harness the power of WordPress and turn it into a CPA Affiliate's Dream

Introducing the CPA Content Wizard

Here is a Blog I set up in less than 10 Minutes using the

CPA Content Wizard

CLICK HERE for Sample Blog

 

The CPA Content Wizard automatically schedules and posts highly sought after content. Then it monetizes each post making you MONEY 24 hours a day, 7 days a week, 365 days a year.

  • Get Highly Sought after Content delivered to your blog automatically.
  • Post that content right away or schedule the posts for a more natural blog presence.
  • Hungry Visitors quickly come to YOUR content with very little promotion (We show you how to get a flood of visitors right away)
  • Monetize each link clicked. MAKING YOU A LOT OF MONEY!!!

 

The CPA Content Wizard is three (3) WordPress plugins in ONE.

CPA Content Wizard Part 1

Automatic Content Generation

 

Easily Generate Unlimited Content at the push of a button!

The CPA Content Wizard’s RSS Feed Manager lets you publish content to your blog using RSS feeds.

 

  • We show you how to select the BEST RSS Feed for your USE.
  • Tell the RSS Feed Manager how many articles you want to post from that feed each time.
  • You can choose the category (or categories) you want the post to appear in.
  • You can select the tags you want added to each post.

The CPA Content Wizard RSS Feed Manager works in the background to update your blog as often as new RSS feed posts are released (the plugin takes care of duplicated entries and removes them before they appear). There is no faster way to build a substantial number of articles without the hassle of writing them all yourself!
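The plugin itself is PHP/WordPress, but the deduplication step it describes is simple enough to sketch in a few lines of Python. This is a hypothetical illustration, not the plugin's actual code; the feed structure and function names are assumptions:

```python
# Sketch of the deduplication step an RSS feed manager might perform
# before posting: parse a feed and keep only entries whose GUID (or,
# failing that, link) has not been posted before.
import xml.etree.ElementTree as ET

def new_entries(feed_xml, seen_guids):
    """Return (title, link) pairs for entries not already posted."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid") or item.findtext("link")
        if guid and guid not in seen_guids:
            seen_guids.add(guid)
            fresh.append((item.findtext("title"), item.findtext("link")))
    return fresh
```

The `seen_guids` set would be persisted between runs (the real plugin presumably stores it in the WordPress database) so that re-fetching the same feed never produces duplicate posts.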

You can now have content automatically posted for each one of these categories, promoted for you, at any frequency you wish…without
ever writing a single line…

 

CPA Content Wizard Part 2

Automated BLOG Poster & Scheduler

There is NO need to manually post to your blog ever again.

 

  • You determine the number of articles and the intervals to post your content, making the search engines love you.
  • Posts are placed in the category you specify and automatically tagged with keywords relevant to that post, so that the search engines and
    directories eagerly seek out and gobble up your newly posted content!
  • You can easily set up when to publish your content: days, months or even years into the future, so it always looks like you are taking the time to update and maintain your blog for years to come! (No limit on how many articles you import and line up at once.)
  • You can easily manage content waiting to be published (edit, delete).
  • You can manually post content with the click of a button, without waiting for the scheduled date.
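The scheduling idea above can be sketched in a few lines: spread the queued articles over future publish dates at a fixed interval. This is a hypothetical Python illustration (the real plugin is PHP; the function name and default interval are assumptions):

```python
# Assign each queued article a future publish datetime at a fixed
# interval, the way an auto-poster might line up a content queue.
from datetime import datetime, timedelta

def schedule_posts(titles, start, interval_hours=24):
    """Return (title, publish_datetime) pairs, one interval apart."""
    return [(title, start + timedelta(hours=interval_hours * i))
            for i, title in enumerate(titles)]
```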

 

CPA Content Wizard Part 3

 

Automated Content Protection

 

Now that you have your content posting on an interval, it's time to MAKE SOME MONEY!

Black hatters are familiar with the concept of protecting content and forcing the visitor to fill out an offer ($500 gift card, Free Xbox 360, etc.) before getting the download. We have taken this concept and put it on STEROIDS!!!

 


 

  • The CPA Content Wizard Automatically Locks Down Your Content and forces people to complete your CPA Offers.
  • You can determine the delay between your offers (from a 1-minute wait time up to infinity).
  • You determine the length of time your visitor can access your content, via cookies, before they must fill out another offer.
  • You can list as many as five (5) CPA offers your visitor can choose from to complete.
  • No HTML or PHP knowledge is needed to create your offers, and you can get them up and running quickly.
  • You can turn off the CPA Content Wizard’s content protection system with a simple mouse click.
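A cookie-based access window like the one described usually works by storing a timestamp when the visitor completes an offer and checking its age on each visit. Here is a rough Python sketch of that check; the real plugin is PHP, and the secret, names, and HMAC-signing scheme are assumptions added so the cookie cannot be trivially forged:

```python
# Timestamped, signed cookie: access is granted only while the cookie
# is younger than the configured window and the signature verifies.
import hashlib
import hmac
import time

SECRET = b"change-me"  # hypothetical per-site secret

def make_cookie(now=None):
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"

def has_access(cookie, window_seconds, now=None):
    try:
        ts, sig = cookie.split(":")
    except ValueError:
        return False
    good = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False  # tampered or forged cookie
    now = now if now is not None else time.time()
    return now - int(ts) <= window_seconds
```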

 

Get it Here…….

CLICK HERE

 

 

 

 

"Discover The Lazy Webmaster's Way To Making A Fortune From Your Blog In Less Than 3 Minutes!"

 

If you’re looking for a hands-free approach to getting unlimited profits from WordPress blogs, then:

It's So "Set-it-and-Forget-it" Simple To Keep Your Blogs Updated Without Lifting A Finger

Your Answer To End Boring Blog Posting

This AutoBlog  WordPress plugin will help you put an end to boring blog postings once and for all.

It’s true! The program works effortlessly by pulling FULL articles from the ArticlesBase article directory to your WordPress blog.

Get fresh new articles each and every day, posted automatically to your blog.

 

Using Auto Blog Plugin vs Getting Content from RSS feeds

Why is getting content from the Auto Blog Plugin a better solution
than RSS feeds?

  • RSS feeds only pull information to your blog from the first few sentences of various content sites (usually not more than 55 words).
  • With the Auto Blog Plugin you get FULL ARTICLES delivered to your blog, at least 300 words long!

 

How Using the Auto Blog Will Help You To Make More Profit

 

  • You'll have more time to run a successful online business instead of looking for content and posting it to all of your blogs.
  • With more free time you can start working on more niche projects.
  • You will have more pages on your websites, more backlinks, and a better position in the search engines.
  • If you have used RSS feeds for content, with Automatic Blog your blogs will have more quality content.
  • With quality content you will have more influence on visitors, can be closer to them, and have more opportunity to convert them into your customers.

 

 Download it Exclusively HERE…..FREE….

CLICK HERE to Download

 

 

If you have spent any significant amount of time online, you have likely come across the term  Black Hat at one time or another.

This term is usually associated with many negative comments. This article is here to address those comments and provide some insight into the real life of a Black Hat SEO professional. I’ve been involved in internet marketing for close to 10 years now, the last 7 of which have been dedicated to Black Hat SEO. As we will discuss shortly, you can’t be a great Black Hat without first becoming a great White Hat marketer. With the formalities out of the way, let's get into the meat of things, shall we?

 

What is Black Hat SEO?

The million dollar question that everyone has an opinion on. What exactly is Black Hat SEO?

The answer here depends largely on who you ask. Ask most White Hats and they immediately quote the Google Webmaster Guidelines like a bunch of lemmings. Have you ever really stopped to think about it though? Google publishes those guidelines because they know as well as you and I that they have no way of detecting or preventing what they preach so loudly. They rely on droves of webmasters to blindly repeat everything they say because they are an internet powerhouse and they have everyone brainwashed into believing anything they tell them. This is actually a good thing though. It means that the vast majority of internet marketers and SEO professionals are completely blind to the vast array of tools at their disposal that not only increase traffic to their sites, but also make us all millions in revenue every year.

The second argument you are likely to hear is the age-old "the search engines will ban your sites if you use Black Hat techniques." Sure, this is true if you have no understanding of the basic principles or practices. If you jump in with no knowledge you are going to fail. I’ll give you the secret though. Ready? Don’t use black hat techniques on your White Hat domains. Not directly at least. You aren’t going to build doorway or cloaked pages on your money site; that would be idiotic. Instead you buy several throwaway domains, build your doorways on those and cloak/redirect the traffic to your money sites. You lose a doorway domain, who cares? Build 10 to replace it. It isn’t rocket science, just common sense. A search engine can’t possibly penalize you for outside influences that are beyond your control. They can’t penalize you for incoming links, nor can they penalize you for sending traffic to your domain from other doorway pages outside of that domain. If they could, I would simply point doorway pages and spam links at my competitors to knock them out of the SERPS. See….. common sense.

 

So again, what is Black Hat SEO? In my opinion, Black Hat SEO and White Hat SEO are almost no different. White Hat webmasters spend time carefully finding link partners to increase rankings for their keywords; Black Hats do the same thing, but we write automated scripts to do it while we sleep. White Hat SEOs spend months perfecting the on-page SEO of their sites for maximum rankings; Black Hat SEOs use content generators to spit out thousands of generated pages to see which version works best. Are you starting to see a pattern here? You should. Black Hat SEO and White Hat SEO are one and the same, with one key difference: Black Hats are lazy. We like things automated. Have you ever heard the phrase "Work smarter, not harder"? We live by those words. Why spend weeks or months building pages only to have Google slap them down with some obscure penalty?

If you have spent any time on web master forums you have heard that story time and time again. A web master plays by the rules, does nothing outwardly wrong or evil, yet their site is completely gone from the SERPS (Search Engine Results Pages) one morning for no apparent reason. It’s frustrating, we’ve all been there. Months of work gone and nothing to show for it. I got tired of it as I am sure you are. That’s when it came to me. Who elected the search engines the "internet police"? I certainly didn’t, so why play by their rules? In the following pages I’m going to show you why the search engines rules make no sense, and further I’m going to discuss how you can use that information to your advantage.

Search Engine 101

As we discussed earlier, every good Black Hat must be a solid White Hat. So, let's start with the fundamentals. This section is going to get technical as we discuss how search engines work and delve into ways to exploit those inner workings. Let's get started, shall we?

Search engines match queries against an index that they create. The index consists of the words in each document, plus pointers to their locations within the documents. This is called an inverted file. A search engine or IR (Information Retrieval) system comprises four essential modules:

A document processor

A query processor

A search and matching function

A ranking capability

While users focus on "search," the search and matching function is only one of the four modules. Each of these four modules may cause the expected or unexpected results that consumers get when they use a search engine.

Document Processor

The document processor prepares, processes, and inputs the documents, pages, or sites that users search against. The document processor performs some or all of the following steps:

Normalizes the document stream to a predefined format.

Breaks the document stream into desired retrievable units.

Isolates and metatags subdocument pieces.

Identifies potential indexable elements in documents.

Deletes stop words.

Stems terms.

Extracts index entries.

Computes weights.

Creates and updates the main inverted file against which the search engine searches in order to match queries to documents.

 

The document processor extracts the remaining entries from the original document. For example, the following paragraph shows the full text sent to a search engine for processing:

Milosevic’s comments, carried by the official news agency Tanjug, cast doubt over the governments at the talks, which the international community has called to try to prevent an all-out war in the Serbian province. "President Milosevic said it was well known that Serbia and Yugoslavia were firmly committed to resolving problems in Kosovo, which is an integral part of Serbia, peacefully in Serbia with the participation of the representatives of all ethnic communities," Tanjug said. Milosevic was speaking during a meeting with British Foreign Secretary Robin Cook, who delivered an ultimatum to attend negotiations in a week’s time on an autonomy proposal for Kosovo with ethnic Albanian leaders from the province. Cook earlier told a conference that Milosevic had agreed to study the proposal.

 

The document processor reduces this text for searching to the following:

Milosevic comm carri offic new agen Tanjug cast doubt govern talk interna commun call try prevent all-out war Serb province President Milosevic said well known Serbia Yugoslavia firm commit resolv problem Kosovo integr part Serbia peace Serbia particip representa ethnic commun Tanjug said Milosevic speak meeti British Foreign Secretary Robin Cook deliver ultimat attend negoti week time autonomy propos Kosovo ethnic Alban lead province Cook earl told conference Milosevic agree study propos.

The output is then inserted and stored in an inverted file that lists the index entries and an indication of their position and frequency of occurrence. The specific nature of the index entries, however, will vary based on the decision concerning what constitutes an "indexable term." More sophisticated document processors will have phrase recognizers, as well as Named Entity recognizers and Categorizers, to ensure index entries such as Milosevic are tagged as a Person and entries such as Yugoslavia and Serbia as Countries.
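The stop-word removal and stemming that produced the reduced text above can be approximated with a crude sketch like the following. A real engine would use a proper stemmer such as Porter's; the stop list and suffix list here are illustrative assumptions only:

```python
# Crude approximation of stop-word removal plus suffix-stripping
# stemming. Only meant to illustrate the reduction, not to match a
# real stemmer's output.
STOP_WORDS = {"the", "a", "an", "of", "by", "to", "in", "at", "was",
              "it", "that", "and", "which", "has", "is", "with", "who"}
SUFFIXES = ("ations", "ation", "ments", "ment", "ings", "ing",
            "ies", "ed", "ly", "s")

def stem(word):
    # Strip the first matching suffix, keeping at least a 3-letter root.
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: len(word) - len(suf)]
    return word

def reduce_text(text):
    tokens = [w.strip('".,').lower() for w in text.split()]
    return [stem(w) for w in tokens if w and w not in STOP_WORDS]
```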

Term weight assignment. Weights are assigned to terms in the index file. The simplest of search engines just assign a binary weight: 1 for presence and 0 for absence. The more sophisticated the search engine, the more complex the weighting scheme. Measuring the frequency of occurrence of a term in the document creates more sophisticated weighting, with length-normalization of frequencies still more sophisticated. Extensive experience in information retrieval research over many years has clearly demonstrated that the optimal weighting comes from use of "tf/idf." This algorithm measures the frequency of occurrence of each term within a document. Then it compares that frequency against the frequency of occurrence in the entire database.

Not all terms are good "discriminators" — that is, all terms do not single out one document from another very well. A simple example would be the word "the." This word appears in too many documents to help distinguish one from another. A less obvious example would be the word "antibiotic." In a sports database when we compare each document to the database as a whole, the term "antibiotic" would probably be a good discriminator among documents, and therefore would be assigned a high weight. Conversely, in a database devoted to health or medicine, "antibiotic" would probably be a poor discriminator, since it occurs very often. The TF/IDF weighting scheme assigns higher weights to those terms that really distinguish one document from the others.
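A minimal tf/idf weight along these lines might look like the sketch below. This is bare-bones on purpose; production engines add length normalization and smoothing:

```python
# tf/idf: term frequency within a document, down-weighted by how many
# documents in the collection contain the term. A term that appears
# in every document gets idf = log(1) = 0 and so carries no weight.
import math

def tf_idf(term, doc_tokens, all_docs):
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in all_docs if term in d)
    idf = math.log(len(all_docs) / df) if df else 0.0
    return tf * idf
```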

Query Processor

Query processing has seven possible steps, though a system can cut these steps short and proceed to match the query to the inverted file at any of a number of places during the processing. Document processing shares many steps with query processing. More steps and more documents make the process more expensive for processing in terms of computational resources and responsiveness. However, the longer the wait for results, the higher the quality of results. Thus, search system designers must choose what is most important to their users — time or quality. Publicly available search engines usually choose time over very high quality, having too many documents to search against.

The steps in query processing are as follows (with the option to stop processing and start matching indicated as "Matcher"):

Tokenize query terms.

Recognize query terms vs. special operators.

————————> Matcher

(At this point, a search engine may take the list of query terms and search them against the inverted file. In fact, this is the point at which the majority of publicly available search engines perform the search.)

Delete stop words.

Stem words.

Create query representation.

————————> Matcher

Expand query terms.

Compute weights.

————————> Matcher

 

Step 1: Tokenizing. As soon as a user inputs a query, the search engine — whether a keyword-based system or a full natural language processing (NLP) system — must tokenize the query stream, i.e., break it down into understandable segments. Usually a token is defined as an alpha-numeric string that occurs between white space and/or punctuation.
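Under that definition, tokenizing reduces to extracting the alpha-numeric runs from the query stream, e.g.:

```python
# Tokenizer matching the definition above: alpha-numeric strings
# delimited by whitespace and/or punctuation.
import re

def tokenize(query):
    return re.findall(r"[A-Za-z0-9]+", query)
```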

Step 2: Parsing. Since users may employ special operators in their query, including Boolean, adjacency, or proximity operators, the system needs to parse the query first into query terms and operators. These operators may occur in the form of reserved punctuation (e.g., quotation marks) or reserved terms in specialized format (e.g., AND, OR). In the case of an NLP system, the query processor will recognize the operators implicitly in the language used no matter how the operators might be expressed (e.g., prepositions, conjunctions, ordering).

Steps 3 and 4: Stop list and stemming. Some search engines will go further and stop-list and stem the query, similar to the processes described above in the Document Processor section. The stop list might also contain words from commonly occurring querying phrases, such as, "I’d like information about." However, since most publicly available search engines encourage very short queries, as evidenced in the size of query window provided, the engines may drop these two steps.

Step 5: Creating the query. How each particular search engine creates a query representation depends on how the system does its matching. If a statistically based matcher is used, then the query must match the statistical representations of the documents in the system. Good statistical queries should contain many synonyms and other terms in order to create a full representation. If a Boolean matcher is utilized, then the system must create logical sets of the terms connected by AND, OR, or NOT.

An NLP system will recognize single terms, phrases, and Named Entities. If it uses any Boolean logic, it will also recognize the logical operators from Step 2 and create a representation containing logical sets of the terms to be AND’d, OR’d, or NOT’d.
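The Boolean side of such a representation can be sketched as set operations over an inverted index's posting lists (the index layout here is an illustrative assumption):

```python
# Boolean matching as set algebra over posting lists: AND is
# intersection, OR is union, NOT is complement against all doc ids.
def postings(index, term):
    return set(index.get(term, ()))

def boolean_and(index, a, b):
    return postings(index, a) & postings(index, b)

def boolean_or(index, a, b):
    return postings(index, a) | postings(index, b)

def boolean_not(index, a, all_doc_ids):
    return set(all_doc_ids) - postings(index, a)
```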

At this point, a search engine may take the query representation and perform the search against the inverted file. More advanced search engines may take two further steps.

Step 6: Query expansion. Since users of search engines usually include only a single statement of their information needs in a query, it becomes highly probable that the information they need may be expressed using synonyms, rather than the exact query terms, in the documents which the search engine searches against. Therefore, more sophisticated systems may expand the query into all possible synonymous terms and perhaps even broader and narrower terms.

This process approaches what search intermediaries did for end users in the earlier days of commercial search systems. Back then, intermediaries might have used the same controlled vocabulary or thesaurus used by the indexers who assigned subject descriptors to documents. Today, resources such as WordNet are generally available, or specialized expansion facilities may take the initial query and enlarge it by adding associated vocabulary.
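In miniature, query expansion is just a lookup into a synonym resource. The tiny hand-made table below stands in for something like WordNet; its entries are illustrative assumptions:

```python
# Expand each query term with its known synonyms before matching.
SYNONYMS = {
    "car": {"automobile", "auto"},
    "doctor": {"physician"},
}

def expand_query(terms):
    expanded = set(terms)
    for t in terms:
        expanded |= SYNONYMS.get(t, set())
    return expanded
```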

Step 7: Query term weighting (assuming more than one query term). The final step in query processing involves computing weights for the terms in the query. Sometimes the user controls this step by indicating either how much to weight each term or simply which term or concept in the query matters most and must appear in each retrieved document to ensure relevance.

Leaving the weighting up to the user is not common, because research has shown that users are not particularly good at determining the relative importance of terms in their queries. They can’t make this determination for several reasons. First, they don’t know what else exists in the database, and document terms are weighted by being compared to the database as a whole. Second, most users seek information about an unfamiliar subject, so they may not know the correct terminology.

Few search engines implement system-based query weighting, but some do an implicit weighting by treating the first term(s) in a query as having higher significance. The engines use this information to provide a list of documents/pages to the user.

After this final step, the expanded, weighted query is searched against the inverted file of documents.

 

Search and Matching Function

How systems carry out their search and matching functions differs according to which theoretical model of information retrieval underlies the system’s design philosophy. Since making the distinctions between these models goes far beyond the goals of this article, we will only make some broad generalizations in the following description of the search and matching function.

Searching the inverted file for documents meeting the query requirements, referred to simply as "matching," is typically a standard binary search, no matter whether the search ends after the first two, five, or all seven steps of query processing. The computational processing required for simple, unweighted, non-Boolean query matching is far simpler than for an NLP-based query within a weighted, Boolean model. It also follows, however, that the simpler the document representation, the query representation, and the matching algorithm, the less relevant the results, except for very simple queries, such as one-word, non-ambiguous queries seeking the most generally known information.

Having determined which subset of documents or pages matches the query requirements to some degree, a similarity score is computed between the query and each document/page based on the scoring algorithm used by the system. Scoring algorithms' rankings are based on the presence/absence of query term(s), term frequency, tf/idf, Boolean logic fulfillment, or query term weights. Some search engines use scoring algorithms not based on document contents, but rather on relations among documents or the past retrieval history of documents/pages.

After computing the similarity of each document in the subset of documents, the system presents an ordered list to the user. The sophistication of the ordering of the documents again depends on the model the system uses, as well as the richness of the document and query weighting mechanisms. For example, search engines that only require the presence of any alpha-numeric string from the query occurring anywhere, in any order, in a document would produce a very different ranking than one by a search engine that performed linguistically correct phrasing for both document and query representation and that utilized the proven tf/idf weighting scheme.
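For a weighted model, the similarity-scoring-and-ranking step described above is commonly cosine similarity between the query's and each document's term-weight vectors. A minimal sketch:

```python
# Cosine similarity between sparse term-weight vectors, then an
# ordered results list, highest score first.
import math

def cosine(q, d):
    common = set(q) & set(d)
    num = sum(q[t] * d[t] for t in common)
    den = (math.sqrt(sum(v * v for v in q.values()))
           * math.sqrt(sum(v * v for v in d.values())))
    return num / den if den else 0.0

def rank(query_vec, doc_vecs):
    scored = [(doc_id, cosine(query_vec, vec))
              for doc_id, vec in doc_vecs.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```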

However the search engine determines rank, the ranked results list goes to the user, who can then simply click and follow the system’s internal pointers to the selected document/page.

More sophisticated systems will go even further at this stage and allow the user to provide some relevance feedback or to modify their query based on the results they have seen. If either of these are available, the system will then adjust its query representation to reflect this value-added feedback and re-run the search with the improved query to produce either a new set of documents or a simple re-ranking of documents from the initial search.

What Document Features Make a Good Match to a Query

We have discussed how search engines work, but what features of a document make for a good match to a query? Let’s look at the key features and consider some pros and cons of their utility in helping to retrieve a good representation of documents/pages.

Term frequency: How frequently a query term appears in a document is one of the most obvious ways of determining a document’s relevance to a query. While most often true, several situations can undermine this premise. First, many words have multiple meanings — they are polysemous. Think of words like "pool" or "fire." Many of the non-relevant documents presented to users result from matching the right word, but with the wrong meaning.

Also, in a collection of documents in a particular domain, such as education, common query terms such as "education" or "teaching" are so common and occur so frequently that an engine’s ability to distinguish the relevant from the non-relevant in a collection declines sharply. Search engines that don’t use a tf/idf weighting algorithm do not appropriately down-weight the overly frequent terms, nor are higher weights assigned to appropriate distinguishing (and less frequently-occurring) terms, e.g., "early-childhood."

Location of terms: Many search engines give preference to words found in the title or lead paragraph or in the meta data of a document. Some studies show that the location — in which a term occurs in a document or on a page — indicates its significance to the document. Terms occurring in the title of a document or page that match a query term are therefore frequently weighted more heavily than terms occurring in the body of the document. Similarly, query terms occurring in section headings or the first paragraph of a document may be more likely to be relevant.

Link analysis: Some engines also weight pages by the links pointing to them, favoring those referred to by many other pages, or that have a high number of "in-links".

Popularity: Google and several other search engines add popularity to link analysis to help determine the relevance or value of pages. Popularity utilizes data on the frequency with which a page is chosen by all users as a means of predicting relevance. While popularity is a good indicator at times, it assumes that the underlying information need remains the same.

Date of Publication: Some search engines assume that the more recent the information is, the more likely that it will be useful or relevant to the user. The engines therefore present results beginning with the most recent to the less current.

Length: While length per se does not necessarily predict relevance, it is a factor when used to compute the relative merit of similar pages. So, in a choice between two documents both containing the same query terms, the document that contains a proportionately higher occurrence of the term relative to the length of the document is assumed more likely to be relevant.

Proximity of query terms: When the terms in a query occur near to each other within a document, it is more likely that the document is relevant to the query than if the terms occur at greater distance. While some search engines do not recognize phrases per se in queries, some search engines clearly rank documents in results higher if the query terms occur adjacent to one another or in closer proximity, as compared to documents in which the terms occur at a distance.

Proper nouns: These sometimes have higher weights, since so many searches are performed on people, places, or things. While this may be useful, if the search engine assumes that you are searching for a name instead of the same word as a normal everyday term, then the search results may be peculiarly skewed. Imagine getting information on "Madonna," the rock star, when you were looking for pictures of Madonnas for an art history class.

Summary

Now that we have covered how a search engine works, we can discuss methods to take advantage of them. Let's start with content. As you saw in the above pages, search engines are simple text parsers. They take a series of words and try to reduce them to their core meaning. They can't understand text, nor do they have the capability of discerning between grammatically correct text and complete gibberish. This of course will change over time as search engines evolve and the cost of hardware falls, but we black hats will evolve as well, always aiming to stay at least one step ahead. Let's discuss the basics of generating content as well as some software used to do so, but first, we need to understand duplicate content. A widely passed around myth on webmaster forums is that duplicate content is viewed by search engines as a percentage. As long as you stay below the threshold, you pass by penalty free. It's a nice thought; it's just too bad that it is completely wrong.

Duplicate Content

I’ve read seemingly hundreds of forum posts discussing duplicate content, none of which gave the full picture, leaving me with more questions than answers. I decided to spend some time doing research to find out exactly what goes on behind the scenes. Here is what I have discovered.

Most people are under the assumption that duplicate content is looked at on the page level when in fact it is far more complex than that. Simply saying that "by changing 25 percent of the text on a page it is no longer duplicate content" is not a true or accurate statement. Let's examine why that is.

To gain some understanding we need to take a look at the k-shingle algorithm that may or may not be in use by the major search engines (my money is that it is in use). I've seen the following used as an example, so let's use it here as well.

Let’s suppose that you have a page that contains the following text:

The swift brown fox jumped over the lazy dog.

Before we get to this point the search engine has already stripped all tags and HTML from the page leaving just this plain text behind for us to take a look at.

The shingling algorithm essentially finds word groups within a body of text in order to determine the uniqueness of the text. The first thing it does is strip out all stop words like "and", "the", "of", and "to". It also strips out all filler words, leaving only the action words, which are considered the core of the content. Once this is done the following "shingles" are created from the above text. (I'm going to include the stop words for simplicity.)

The swift brown fox

swift brown fox jumped

brown fox jumped over

fox jumped over the

jumped over the lazy

over the lazy dog

These are essentially like unique fingerprints that identify this block of text. The search engine can now compare this “fingerprint” to other pages in an attempt to find duplicate content. As duplicates are found a “duplicate content” score is assigned to the page. If too many “fingerprints” match other documents the score becomes high enough that the search engines flag the page as duplicate content thus sending it to supplemental hell or worse deleting it from their index completely.

Now let's compare that to a second page containing this text:

My old lady swears that she saw the lazy dog jump over the swift brown fox.

The above gives us the following shingles:

my old lady swears

old lady swears that

lady swears that she

swears that she saw

that she saw the

she saw the lazy

saw the lazy dog

the lazy dog jump

lazy dog jump over

dog jump over the

jump over the swift

over the swift brown

the swift brown fox

Comparing these two sets of shingles we can see that only one matches ("the swift brown fox"). Thus it is unlikely that these two documents are duplicates of one another. No one but Google knows what the percentage match must be for these two documents to be considered duplicates, but some thorough testing would sure narrow it down ;).
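The comparison above can be sketched in a few lines of code. This is purely an illustration of the shingling idea, not Google's actual implementation; the four-word shingle length and the Jaccard overlap measure are my assumptions.

```python
# Toy k-shingling: split text into overlapping 4-word "fingerprints"
# and measure how many two documents share.

def shingles(text, k=4):
    words = text.lower().replace(".", "").split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    # fraction of all shingles that the two documents have in common
    return len(a & b) / len(a | b)

doc_a = "The swift brown fox jumped over the lazy dog."
doc_b = "My old lady swears that she saw the lazy dog jump over the swift brown fox."

sa, sb = shingles(doc_a), shingles(doc_b)
print(sorted(sa & sb))            # the single shared shingle
print(round(jaccard(sa, sb), 3))  # low overlap -> not duplicates
```

A real engine would hash each shingle and keep only a sample of the hashes, but the flagging logic (too many shared fingerprints means duplicate) is the same.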

So what can we take away from the above examples? First and foremost we quickly begin to realize that duplicate content is far more difficult than saying “document A and document B are 50 percent similar”. Second we can see that people adding “stop words” and “filler words” to avoid duplicate content are largely wasting their time. It’s the “action” words that should be the focus. Changing action words without altering the meaning of a body of text may very well be enough to get past these algorithms. Then again there may be other mechanisms at work that we can’t yet see rendering that impossible as well. I suggest experimenting and finding what works for you in your situation.

The last paragraph here is the real important part when generating content. You can’t simply add generic stop words here and there and expect to fool anyone. Remember, we’re dealing with a computer algorithm here, not some supernatural power. Everything you do should be from the standpoint of a scientist. Think through every decision using logic and reasoning. There is no magic involved in SEO, just raw data and numbers. Always split test and perform controlled experiments.

What Makes A Good Content Generator?

Now that we understand how a search engine parses documents on the web, as well as the intricacies of duplicate content and what it takes to avoid it, it is time to check out some basic content generation techniques.

One of the more commonly used text spinners is known as Markov. Markov isn't actually intended for content generation; it's something called a Markov chain, developed by mathematician Andrey Markov. The algorithm takes each word in a body of content and reorders the text based on which words tend to follow which. This produces largely unique text, but it's also typically VERY unreadable. The quality of the output really depends on the quality of the input. The other issue with Markov is the fact that it will likely never pass a human review for readability. If you don't shuffle the Markov chains enough you also run into duplicate content issues because of the nature of shingling as discussed earlier. Some people may be able to get around this by replacing words in the content with synonyms. I personally stopped using Markov back in 2006 or 2007 after developing my own proprietary content engine. Some popular software that uses Markov chains includes

RSSGM

and

YAGC

both of which are pretty old and outdated at this point. They are worth taking a look at just to understand the fundamentals, but there are FAR better packages out there.
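For the curious, the core of a word-level Markov spinner like the ones in those packages fits in a few lines. This is a minimal sketch of the general technique, not code from RSSGM or YAGC.

```python
import random

# Build a word -> possible-next-words table, then walk it to "spin" text.
def build_chain(text):
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain, start, length=12, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = chain.get(out[-1])
        if not choices:          # dead end: word never followed by anything
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the swift brown fox jumped over the lazy dog "
          "the lazy dog slept while the swift fox ran")
print(generate(build_chain(corpus), "the", seed=1))
```

The output is statistically plausible word-by-word but usually reads as nonsense overall, which is exactly the readability problem described above.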

So, we've talked about the old methods of doing things, but this isn't 1999. You can't fool the search engines by simply repeating a keyword over and over in the body of your pages (I wish it were still that easy). So what works today? Now and in the future, LSI is becoming more and more important. LSI stands for Latent Semantic Indexing. It sounds complicated, but it really isn't. LSI is basically just a process by which a search engine can infer the meaning of a page based on the content of that page. For example, let's say they index a page and find words like atomic bomb, Manhattan Project, Germany, and Theory of Relativity. The idea is that the search engine can process those words, find relational data and determine that the page is about Albert Einstein. So, ranking for a keyword phrase is no longer as simple as having content that talks about and repeats the target keyword phrase over and over like the good old days. Now we need to make sure we have other key phrases that the search engine thinks are related to the main key phrase.
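The intuition can be demonstrated with a toy latent semantic analysis. The term-document counts below are invented for illustration; real engines operate on billions of documents, and whether they use exactly this math is anyone's guess.

```python
import numpy as np

# Tiny term-document matrix: docs 0 and 1 are "physics" pages, doc 2 is a
# recipe page. LSI boils down to a truncated SVD of this matrix.
terms = ["einstein", "relativity", "bomb", "recipe"]
X = np.array([[2, 1, 0],     # einstein
              [3, 1, 0],     # relativity
              [0, 2, 0],     # bomb
              [0, 0, 3]],    # recipe
             dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                     # keep the 2 strongest latent "topics"
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # each row: a document in topic space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The two physics docs end up nearly identical in topic space even though
# their raw word overlap is only partial; the recipe page stays unrelated.
print(round(cosine(doc_vecs[0], doc_vecs[1]), 2))
print(round(abs(cosine(doc_vecs[0], doc_vecs[2])), 2))
```

This is why pages that mention related phrases ("Manhattan Project", "Theory of Relativity") can rank for "Albert Einstein" without repeating the name itself.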

So if Markov is easy to detect and LSI is starting to become more important, which software works, and which doesn’t?

Software

Fantomaster Shadowmaker: This is probably one of the oldest and most commonly known high end cloaking packages being sold. It’s also one of the most out of date. For $3,000.00 you basically get a clunky outdated interface for slowly building HTML pages. I know, I’m being harsh, but I was really let down by this software. The content engine doesn’t do anything to address LSI. It simply splices unrelated sentences together from random sources while tossing in your keyword randomly. Unless things change drastically I would avoid this one.

SEC (Search Engine Cloaker): Another well known paid script. This one is of good quality and, with work, does provide results. The content engine is mostly manual, requiring you to build sentences which are then mixed together for your content. If you understand SEO and have the time to dedicate to creating the content, the pages built last a long time. I do have two complaints. First, the software is SLOW. It takes days just to set up a few decent pages. That in itself isn't very black hat. Remember, we're lazy! The other gripe is the IP cloaking. Their IP list is terribly out of date, containing only a couple thousand IPs as of this writing.

 
SSEC (Simplified Search Engine Content): This is one of the best IP delivery systems on the market. Their IP list is updated daily and contains close to 30,000 IPs. The member-only forums are the best in the industry. The subscription is worth it just for the information contained there. The content engine is also top notch. It's flexible, so you can choose to use their proprietary scraped content system, which automatically scrapes search engines for your content, or you can use custom content similar in fashion to SEC above, but faster. You can also mix and match the content sources, giving you the ultimate in control. This is the only software as of this writing that takes LSI into account directly from within the content engine. This is also the fastest page builder I have come across. You can easily put together several thousand sites, each with hundreds of pages of content, in just a few hours. Support is top notch, and the knowledgeable staff really knows what they are talking about. This one gets a gold star from me.

BlogSolution: Sold as an automated blog builder, BlogSolution falls short in almost every important area. The blogs created are not wordpress blogs, but rather a proprietary blog software specifically written for BlogSolution. This “feature” means your blogs stand out like a sore thumb in the eyes of the search engines. They don’t blend in at all leaving footprints all over the place. The licensing limits you to 100 blogs which basically means you can’t build enough to make any decent amount of money. The content engine is a joke as well using rss feeds and leaving you with a bunch of easy to detect duplicate content blogs that rank for nothing.

Blog Cloaker: Another solid offering from the guys who developed SSEC, and the natural evolution of that software. This mass site builder is based around wordpress blogs. This software is the best in the industry, hands down. The interface has the feel of a system developed by real professionals. You have the same content options seen in SSEC, but with several different redirection types including header redirection, JavaScript, meta refresh, and even iframe. This again is an IP cloaking solution with the same industry leading IP list as SSEC. The monthly subscription may seem daunting at first, but the price of admission is worth every penny if you are serious about making money in this industry. It literally does not get any better than this.

Cloaking

So what is cloaking? Cloaking is simply showing different content to different people based on different criteria. Cloaking automatically gets a bad reputation, but that is based mostly on ignorance of how it works. There are many legitimate reasons to Cloak pages. In fact, even Google cloaks. Have you ever visited a web site with your cell phone and been automatically directed to the mobile version of the site? Guess what, that’s cloaking. How about web pages that automatically show you information based on your location? Guess what, that’s cloaking. So, based on that, we can break cloaking down into two main categories, user agent cloaking and ip based cloaking.

User Agent cloaking is simply a method of showing different pages or different content to visitors based on the user agent string they visit the site with. A user agent is simply an identifier that every web browser and search engine spider sends to a web server when they connect to a page. Above we used the example of a mobile phone. A Nokia cell phone for example will have a user agent similar to: User-Agent: Mozilla/5.0 (SymbianOS/9.1; U; [en]; Series60/3.0 NokiaE60/4.06.0) AppleWebKit/413 (KHTML, like Gecko) Safari/413

Knowing this, we can tell the difference between a mobile phone visiting our page and a regular visitor viewing our page with Internet Explorer or Firefox for example. We can then write a script that will show different information to those users based on their user agent.
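A user agent cloaking script really is only a few lines in any server-side language. This sketch is hypothetical (the bot tokens and page strings are mine), but it shows the whole trick:

```python
# Serve different content depending on the visitor's User-Agent header.
BOT_TOKENS = ("googlebot", "bingbot", "slurp")   # assumed spider markers

def page_for(user_agent):
    ua = (user_agent or "").lower()
    if any(token in ua for token in BOT_TOKENS):
        return "keyword-optimized page"   # what the spider is shown
    if "symbianos" in ua or "mobile" in ua:
        return "mobile page"              # legitimate mobile cloaking
    return "ad/offer page"                # what a human visitor sees

print(page_for("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # keyword-optimized page
print(page_for("Mozilla/5.0 (SymbianOS/9.1; U) Safari/413")) # mobile page
print(page_for("Mozilla/5.0 (Windows NT 10.0) Firefox/115")) # ad/offer page
```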

Sounds good, doesn't it? Well, it works for basic things like mobile and non-mobile versions of pages, but it's also very easy to detect, fool, and circumvent. Firefox, for example, has a handy plug-in that allows you to change your user agent string to anything you want. Using that plug-in, I can make the script think that I am a Google search engine bot, thus rendering your cloaking completely useless. So, what else can we do if user agents are so easy to spoof?

IP Cloaking

Every visitor to your web site must first establish a connection from an IP address. A reverse DNS lookup resolves that IP address to a host name, which in turn identifies the origin of the visitor. Every search engine crawler must identify itself with a unique signature viewable by reverse DNS lookup. This means we have a surefire method for identifying and cloaking based on IP address. This also means that we don't rely on the user agent at all, so there is no way to circumvent IP-based cloaking (although some caution must be taken, as we will discuss). The most difficult part of IP cloaking is compiling a list of known search engine IPs. Luckily, software like

Blog Cloaker

and

SSEC

already does this for us. Once we have that information, we can then show different pages to different users based on the ip they visit our page with. For example, I can show a search engine bot a keyword targeted page full of key phrases related to what I want to rank for. When a human visits that same page I can show an ad, or an affiliate product so I can make some money. See the power and potential here?
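The verification step behind IP cloaking is a forward-confirmed reverse DNS check. Here is a sketch; the resolver calls are Python standard library, made injectable so the logic can be exercised without network access. The hostname suffixes are the ones Google's crawlers use, but treat the rest (page strings, sample IPs) as illustrative.

```python
import socket

def is_googlebot(ip,
                 rdns=lambda ip: socket.gethostbyaddr(ip)[0],
                 fdns=lambda host: socket.gethostbyname_ex(host)[2]):
    """Reverse-resolve the IP, check the hostname, then forward-confirm it."""
    try:
        host = rdns(ip) or ""
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        return ip in fdns(host)   # the name must resolve back to the same IP
    except OSError:               # covers socket.herror / socket.gaierror
        return False

def serve(ip, bot_page, human_page, **dns):
    return bot_page if is_googlebot(ip, **dns) else human_page

# Simulated lookups for demonstration (real crawler IPs change over time):
fake_rdns = {"66.249.66.1": "crawl-66-249-66-1.googlebot.com"}.get
fake_fdns = {"crawl-66-249-66-1.googlebot.com": ["66.249.66.1"]}.get
print(serve("66.249.66.1", "keyword page", "ad page",
            rdns=fake_rdns, fdns=fake_fdns))   # keyword page
print(serve("203.0.113.9", "keyword page", "ad page",
            rdns=fake_rdns, fdns=fake_fdns))   # ad page
```

The forward-confirmation step matters: anyone can point reverse DNS for their own IP block at a name like "googlebot.com.example.net", but they can't make Google's zone resolve that name back to their IP.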

So how can we detect IP cloaking? Every major search engine maintains a cache of the pages it indexes. This cache is going to contain the page as the search engine bot saw it at indexing time. This means your competition can view your cloaked page by clicking on the cache in the SERPs. That's ok, it's easy to get around that. The use of the meta tag noarchive in your pages forces the search engines to show no cached copy of your page in the search results, so you avoid snooping web masters. The only other method of detection involves IP spoofing, but that is a very difficult and time consuming thing to pull off. Basically you configure a computer to act as if it is using one of Google's IPs when it visits a page. This would allow you to connect as though you were a search engine bot, but the problem here is that the data for the page would be sent to the IP you are spoofing, which isn't on your computer, so you are still out of luck.

The lesson here? If you are serious about this, use ip cloaking. It is very difficult to detect and by far the most solid option.

Link Building

As we discussed earlier, Black Hats are basically White Hats, only lazy! As we build pages, we also need links to get those pages to rank. Let's discuss some common and not so common methods for doing so.

Blog ping: This one is quite old, but still widely used. Blog indexing services set up a protocol in which a web site can send a ping whenever new pages are added to a blog. They can then send over a bot that grabs the page content for indexing and searching, or simply to add as a link in their blog directory. Black Hats exploit this by writing scripts that send out massive numbers of pings to various services in order to entice bots to crawl their pages. This method certainly drives the bots, but in the last couple of years it has lost most of its power as far as getting pages to rank.
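The ping itself is a one-method XML-RPC call, weblogUpdates.ping. The sketch below builds the exact payload with the standard library; the Ping-o-Matic endpoint in the comment is one well-known aggregator, shown only as an example.

```python
import xmlrpc.client

def ping_payload(blog_name, blog_url):
    # The XML body a blog POSTs to a ping service when a new page goes up.
    return xmlrpc.client.dumps((blog_name, blog_url),
                               methodname="weblogUpdates.ping")

def send_ping(service_url, blog_name, blog_url):
    # Live version (needs network), e.g. service_url="http://rpc.pingomatic.com/"
    server = xmlrpc.client.ServerProxy(service_url)
    return server.weblogUpdates.ping(blog_name, blog_url)

print(ping_payload("My Blog", "http://example.com/"))
```

The black hat variant is just this call in a loop over many page URLs and many ping services.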

Trackback: Another method of communication used by blogs, trackbacks are basically a method in which one blog can tell another blog that it has posted something related to or in response to an existing blog post. As a black hat, we see that as an opportunity to inject links to thousands of our own pages by automating the process and sending out trackbacks to as many blogs as we can. Most blogs these days have software in place that greatly limits or even eliminates trackback spam, but it’s still a viable tool.

EDU links: A couple of years ago Black Hats noticed an odd trend. Universities and government agencies with very high ranking web sites oftentimes have very old message boards they have long forgotten about, but that still have public access. We took advantage of that by posting millions of links to our pages on these abandoned sites. This gave a HUGE boost to rankings and made some very lucky Viagra spammers millions of dollars. The effectiveness of this approach has diminished over time.

Forums and Guest books: The internet contains millions of forums and guest books, all ripe for the picking. While most forums are heavily moderated (at least the active ones), that still leaves you with thousands in which you can drop links where no one will likely notice or even care. We're talking about abandoned forums, old guest books, etc. Now, you can get links dropped on active forums as well, but it takes some more creativity, for example putting up a post related to the topic of the forum and dropping your link in the BB code for a smiley. Software packages like Xrumer made this a VERY popular way to gather back links. So much so that most forums have methods in place to detect and reject these types of links. Some people still use them and are still successful.

Link Networks: Also known as link farms, these have been popular for years. Most are very simplistic in nature. Page A links to page B, page B links to page C, then back to A. These are pretty easy to detect because of the limited range of IPs involved. It doesn't take much processing to figure out that there are only a few people involved with all of the links. So, the key here is to have a very diverse pool of links.

Money Making Strategies

We now have a solid understanding of cloaking, how a search engine works, content generation, software to avoid, software that is pure gold and even link building strategies. So how do you pull all of it together to make some money?

Landing Pages: Tools like Landing Page Builder create dynamic landing pages based on the traffic you send them. You load up your money keyword list, set up a template with your ads or offers, then send all of your doorway/cloaked traffic to the index page. The Landing Page Builder shows the best possible page with ads based on what the incoming user searched for. It couldn't be easier, and it automates the difficult tasks we all hate.

Affiliate Marketing: We all know what an affiliate program is. There are literally tens of thousands of affiliate programs with millions of products to sell. The most difficult part of affiliate marketing is getting well qualified targeted traffic. That again is where good software and cloaking come into play. Some networks and affiliates allow direct linking. Direct Linking is where you set up your cloaked pages with all of your product keywords, then redirect straight to the merchant's or affiliate's sales page. This often results in the highest conversion rates, but as I said, some affiliates don't allow Direct Linking. So, again, that's where Landing Pages come in. Either building your own (which we are far too lazy to do), or by using something like Landing Page Builder which automates everything for us. Landing pages give us a place to send and clean our traffic; they also prequalify the buyer and make sure the quality of the traffic sent to the affiliate is as high as possible. After all, we want to make money, but we also want to keep a strong relationship with the affiliate so we can get paid.

Conclusion

As we can see, Black Hat Marketing isn’t all that different from White Hat marketing. We automate the difficult and time consuming tasks so we can focus on the important tasks at hand. I would like to thank you for taking the time to read this.



This is a long-term strategy to make $$$…

From a friend, so here it goes…

The following will build a successful site in one year's time via Google alone. It can be done faster if you are a real go-getter, or everyone's favorite, a self-starter.

A) Prep work and begin building content. Long before the domain name is settled on, start putting together notes to build at least a 100-page site. That's just for openers, and that's 100 pages of real content, as opposed to link pages, resource pages, about/copyright/tos pages, etc. eg: fluff pages.

B) Domain name:
Easily brandable. You want "google.com" and not "mykeyword.com". Keyword domains are out; branding and name recognition are in, big time in. The value of keywords in a domain name has never been less to se's. Learn the lesson of "goto.com" becoming "Overture.com" and why they did it. It's one of the most powerful gut check calls I've ever seen on the internet. That took serious resolve and nerve to blow away several years of branding. (That is a whole 'nother article, but learn the lesson as it applies to all of us.)

C) Site Design:
The simpler the better. Rule of thumb: text content should outweigh the html content. The pages should validate and be usable in everything from Lynx to leading edge browsers. eg: keep it close to html 3.2 if you can. Spiders are not to the point where they really like eating html 4.0 and the mess that it can bring. Stay away from heavy Flash, DOM, Java, and JavaScript. Go external with scripting languages if you must have them; there is little reason to have them that I can see. They will rarely help a site and stand to hurt it greatly due to many factors most people don't appreciate (search engines' distaste for js is just one of them).

Arrange the site in a logical manner with directory names hitting the top keywords you wish to hit.
You can also go the other route and just throw everything in root (this is rather controversial, but it’s been producing good long term results across many engines).
Don’t clutter and don’t spam your site with frivolous links like "best viewed" or other counter like junk. Keep it clean and professional to the best of your ability.

Learn the lesson of Google itself – simple is retro cool – simple is what surfers want.

Speed isn't everything, it's almost the only thing. Your site should respond almost instantly to a request. If you get into even a 3-4 second delay until "something happens" in the browser, you are in long term trouble. That 3-4 second response time may vary for sites destined to live in countries other than your native one. The site should respond locally within 3-4 seconds (max) to any request. Longer than that, and you'll lose 10% of your audience for every second. That 10% could be the difference between success and not.

The pages:

D) Page Size:
The smaller the better. Keep it under 15k if you can. The smaller the better. Keep it under 12k if you can. The smaller the better. Keep it under 10k if you can – I trust you are getting the idea here. Over 5k and under 10k. Ya – that bites – it’s tough to do, but it works. It works for search engines, and it works for surfers. Remember, 80% of your surfers will be at 56k or even less.

E) Content:
Build one page of content of 200-500 words per day and put it online. If you aren't sure what you need for content, start with the Overture keyword suggester and find the core set of keywords for your topic area. Those are your subject starters.

F) Density, position, yada…
Simple old fashioned seo from the ground up.
Use the keyword once in the title, once in the description tag, once in a heading, once in the url, once in bold, once in italic, and once high on the page, and keep the density between 5 and 20% (don't fret about it). Use good sentences and spell check it! Spell checking is becoming important as se's are moving to auto correction during searches. There is no longer a reason to look like you can't spell (unless you really are phonetically challenged).
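The density figure is easy to check mechanically. A quick sketch (the 5-20% band is the author's rule of thumb, not a published threshold):

```python
import re

def keyword_density(text, keyword):
    # percentage of words on the page that are the keyword
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words) if words else 0.0

page = ("Oranges are a citrus fruit. Growing oranges takes patience, "
        "but fresh oranges are worth the wait.")
d = keyword_density(page, "oranges")
print(round(d, 2), 5 <= d <= 20)   # 18.75 True
```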

G) Outbound Links:
From every page, link to one or two high ranking sites under that particular keyword. Use your keyword in the link text (this is ultra important for the future).

H) Insite Cross links.
(cross links in this context are links WITHIN the same site)
Link to on topic quality content across your site. If a page is about food, then make sure it links to the apples and veggies pages. Specifically with Google, on topic cross linking is very important for sharing your pr value across your site. You do NOT want an "all star" page that outperforms the rest of your site. You want 50 pages that produce 1 referral each a day, NOT 1 page that produces 50 referrals a day. If you do find one page that drastically out produces the rest of the site with Google, you need to offload some of that pr value to other pages by cross linking heavily. It's the old share the wealth thing.
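The "share the wealth" effect is easy to see with a toy PageRank computation. This is the classic iterative formulation run on a made-up three-page site, purely to illustrate the point:

```python
# Minimal PageRank: rank(p) = (1-d)/N + d * sum(rank(q)/outlinks(q))
# over all pages q that link to p.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        pr = {p: (1 - d) / len(pages)
                 + d * sum(pr[q] / len(links[q])
                           for q in pages if p in links[q])
              for p in pages}
    return pr

# A is the "all star" page. shared: A cross-links to B and C.
# hoarded: A links only to B, so C never receives any of A's value.
shared = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
hoarded = {"A": ["B"], "B": ["A"], "C": ["A"]}

print(round(pagerank(shared)["C"], 3))    # C benefits from cross linking
print(round(pagerank(hoarded)["C"], 3))   # C gets only the baseline rank
```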

I) Put it Online.
Don't go with virtual hosting; go with a standalone IP.
Make sure the site is "crawlable" by a spider. All pages should be linked to from more than one other page on your site, and not more than 2 levels deep from root. Link the topics vertically as much as possible back to root. A menu that is present on every page should link to your site's main "topic index" pages (the doorways and logical navigation system down into real content).
Don't put it online before you have a quality site to put online. It's worse to put a "nothing" site online than no site at all. You want it fleshed out from the start.

Go for a listing in the ODP. If you have the budget, then submit to Looksmart and Yahoo. If you don’t have the budget, then try for a freebie on Yahoo (don’t hold your breath).

J) Submit
Submit the root to: Google, Fast, Altavista, WiseNut, (write Teoma), DirectHit, and Hotbot. Now comes the hard part – forget about submissions for the next six months. That’s right – submit and forget.

K) Logging and Tracking:
Get a quality logger/tracker that can do justice to inbound referrals based on log files (don’t use a lame graphic counter – you need the real deal). If your host doesn’t support referrers, then back up and get a new host. You can’t run a modern site without full referrals available 24x7x365 in real time.

L) Spiderlings:
Watch for spiders from se's. Make sure those that are crawling the full site can do so easily. If not, double check your linking system (use standard hrefs) to make sure the spider finds its way throughout the site. Don't fret if it takes two spiderings to get your whole site done by Google or Fast. Other se's are pot luck, and it's doubtful that you will be added at all if not within 6 months.

M) Topic directories.
Almost every keyword sector has an authority hub on its topic. Go submit within the guidelines.

N) Links
Look around your keyword sector in Google's version of the ODP. (This is best done AFTER getting an odp listing, or two.) Find sites that have links pages or freely exchange links. Simply request a swap. Put up a page of on topic, in context links yourself as a collection spot.
Don't freak if you can't get people to swap links; move on. Try to swap links with one fresh site a day. A simple personal email is enough. Stay low key about it and don't worry if site Z won't link with you. They will; eventually they will.

O) Content.
One page of quality content per day. Timely, topical articles are always the best. Try to stay away from too much "bloggin" type personal stuff and look more for "article" topics that a general audience will like. Hone your writing skills and read up on the right style of "web speak" that tends to work with the fast and furious web crowd.

Lots of text breaks – short sentences – lots of dashes – something that reads quickly.

Most web users don't actually read, they scan. This is why it is so important to keep low key pages today. People see a huge overblown page at random, and a portion of them will hit the back button before trying to decipher it. They've got better things to do than to waste 15 seconds (a stretch) understanding your whiz bang flash menu system. Just because some big support site can run flashed out motorhead pages is no indication that you can. You don't have the pull factor they do.

Use headers, and bold standout text liberally on your pages as logical separators. I call them scanner stoppers where the eye will logically come to rest on the page.

P) Gimmicks.
Stay far away from any "fades of the day" or anything that appears spammy, unethical, or tricky. Plant yourself firmly on the high ground in the middle of the road.

Q) Link backs
When YOU receive requests for links, check the site out before linking back with them. Check them through Google and their pr value. Look for directory listings. Don’t link back to junk just because they asked. Make sure it is a site similar to yours and on topic.

R) Rounding out the offerings:
Use options such as Email-a-friend, forums, and mailing lists to round out your sites offerings. Hit the top forums in your market and read, read, read until your eyes hurt you read so much.
Stay away from "affiliate fades" that insert content on to your site.

S) Beware of Flyer and Brochure Syndrome
If you have an ecom site or online version of bricks and mortar, be careful not to turn your site into a brochure. These don’t work at all. Think about what people want. They aren’t coming to your site to view "your content", they are coming to your site looking for "their content". Talk as little about your products and yourself as possible in articles (raise eyebrows…yes, I know).

T) Build one page of content per day.
Head back to the Overture suggestion tool to get ideas for fresh pages.

U) Study those logs.
After 30-60 days you will start to see a few referrals from places you've gotten listed. Look for the keywords people are using. See any bizarre combinations? Why are people using those to find your site? If there is something you have overlooked, then build a page around that topic. Reverse engineer your site to feed the search engine what it wants.
If your site is about "oranges", but your referrals are all about "orange citrus fruit", then you can get busy building articles around "citrus" and "fruit" instead of the generic "oranges".
The search engines will tell you exactly what they want to be fed – listen closely, there is gold in referral logs, it’s just a matter of panning for it.
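Panning those referral logs can be automated in a few lines. The referrer URLs below are invented for the example; in this era the engines carried the search phrase in the referrer's q= (Google) or p= (Yahoo) query parameter:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Hypothetical referrer URLs pulled from a server log.
referrers = [
    "http://www.google.com/search?q=orange+citrus+fruit",
    "http://www.google.com/search?q=citrus+fruit+trees",
    "http://search.yahoo.com/search?p=orange+citrus+fruit",
]

def search_terms(urls):
    # Extract and tally the search phrases hidden in referrer URLs.
    counts = Counter()
    for url in urls:
        qs = parse_qs(urlparse(url).query)
        for phrase in qs.get("q", []) + qs.get("p", []):
            counts[phrase] += 1
    return counts

print(search_terms(referrers).most_common(2))
```

Here the logs would tell you to build pages around "citrus" and "fruit" rather than the generic "oranges", exactly the retro-engineering described above.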

V) Timely Topics
Nothing breeds success like success. Stay abreast of developments in your keyword sector. If big site "Z" is coming out with product "A" at the end of the year, then build a page and have it ready in October so that search engines get it by December. eg: go look at all the Xbox and XP sites in Google right now – those are sites that were on the ball last summer.

W) Friends and Family
Networking is critical to the success of a site. This is where all that time you spend in forums will pay off. pssst: Here’s the catch-22 about forums: lurking is almost useless. The value of a forum is in the interaction with your fellow colleagues and cohorts. You learn long term by the interaction – not by just reading.
Networking will pay off in link backs, tips, email exchanges, and it will put you "in the loop" of your keyword sector.

X) Notes, Notes, Notes
If you build one page per day, you will find that brainstorm-like inspiration will hit you at some magic point. Whether it is in the shower (dry off first), driving down the road (please pull over), or just parked at your desk: write it down! Ten minutes of work later, you will have forgotten all about that great idea you just had. Write it down, and get detailed about what you are thinking. When the inspirational juices are no longer flowing, come back to those content ideas. It sounds simple, but it's a lifesaver when the ideas stop coming.

Y) Submission check at six months
Walk back through your submissions and see if you got listed in all the search engines you submitted to after six months. If not, then resubmit and forget again. Try those freebie directories again too.

Z) Build one page of quality content per day.
Starting to see a theme here? Google loves content: lots of quality content, broad-based over a wide range of keywords. At the end of a year's time, you should have around 400 pages of content. That will get you good placement under a wide range of keywords, generate reciprocal links, and overall position your site to stand on its own two feet.

Do those 26 things, and I guarantee you that in one year's time you will call your site a success. It will be drawing between 500 and 2,000 referrals a day from search engines. If you build a good site that averages 4 to 5 page views per user, you should be in the 10-15k page views per day range within a year. What you do with that traffic is up to you, but it is more than enough to "do something" with.

The biggest hidden SECRET is taking ACTION…

What you do today will change your tomorrow. I hope this post is useful and helps you start to make a living from the internet.

This Download includes Many top WordPress plug-ins for your blog.

  • Adsense Deluxe: Allows you to easily embed adsense code in your blog posts.
  • alinks: Automatically link certain words in your blog to other blog posts or websites.
  • All-in-one SEO pack: One of the best plugins out there to have your blog SEO optimized.
  • Google Analyticator: Easily links your blog with your Google Analytics account (if you don’t have one, you can sign up free by clicking here).
  • Sitemap Generator: Generates a sitemap for search engines such as Google and more. Great plugin for SEO purposes.
  • Link cloaking plugin: Automatically cloaks and redirects your affiliate links, or any other links you want to keep private.
  • SEO slugs: Makes URL slugs SEO friendly.
  • Share this: Allows your visitors to spread your post throughout the internet with social networking websites and more.
  • Viper Video Quicktags: Easily embed video from YouTube and other video websites right into blog posts.
  • WordPress Related Post Plugin: Shows other related posts at the end of a post. Great for having users visit more content on your website.
  • WordPress Automatic Upgrade: Upgrades your WordPress installation automatically!
  • wp-cache: Makes your pages load faster.
  • wp-contact form: Gives you a very nice contact form on your blog.

Make sure you unzip the zip file into your plugins folder under “wp-content”. After you upload the plugins, simply click the Plugins link in your WordPress admin and activate each one.

 

Get the complete package now:

CLICK the Plugin Image to Download

wordpress plugin1 300x117 Great WordPress Plug in Pack

One of the biggest factors in ranking high in the search engines is other websites linking to yours. And one of the most often asked questions is……. wait for it……….. how or where do I get them?

Cbox Links – CBox is tagboard software that bloggers and website owners can add to their sites. It works much like the WordPress comments system, only it’s a simple shoutbox-type script that you can throw a few links onto.
Google query to find an updated list of sites running this vulnerable software: allintext:[get a cbox]

Two Quick Free Links – Two simple ones that you have probably seen around a lot when you’re searching for sites are aboutus.org and the wiki directory. These can be thrown up in about one minute flat; make sure that in the wiki directory you add some related categories for better potential. They should get you indexed pretty fast.

Referer Spamming – Quite a simple method. A piece of software called PRstorm, found around this board, should easily get you started: you add in the URLs you want to referer-spam and the ones you want linked. Your link then shows up in referer logs, and you can easily get a fresh list of targets by searching for sites with public referer logs and good PR, or sites with a “My top referers” list. Yes, you can find PRstorm around this forum somewhere.

Unlimited free .edu and .gov links – Another great opportunity is .edu and .gov links, as Google and others give authority to these types of domain extensions.
The ’site:’ operator in Google restricts results to a given domain name or domain extension. You can “hack” this feature to have Google find the most relevant university and government websites related to your sites.

Here are a few examples.
Google query: site:.gov blog [or site:.edu blog]
Results in: Google finds any .gov website that is running a blog or has a /blog/ directory. You can then visit these blogs, post comments (if you can find WordPress blogs like this one), and get hundreds of free .gov backlinks.
[Alternative queries: 'blog' 'blogs' 'wordpress' 'comment' 'guestbook' '2007' '2006']

Google query: site:.edu *your niche* + blog
For example: site:.edu internet marketing blog
The top result is a .edu blog that links to a non-.edu blog, but that blog is related, is PR3, and has .edu backlinks. That is also a great, relevant place to comment, even if it is not directly a .edu. On the other hand, the third result was a PR3, highly related .edu internet marketing blog with zero comments. That is an easy .edu backlink!

You can easily replicate these queries to fit your needs, and it is highly scalable. You can find .edu, .gov, and, if you are lucky, .mil blogs. If you are not as picky, you can just search for the blogs without the .edu or .gov extension, and you can find some high-PageRank blogs on the first pages of results. Play around with it, enjoy it, it’s free! Then of course you know how to drop a link in the comments.
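The queries above follow a simple pattern, so they are easy to generate in bulk. A minimal sketch (the niche and footprint terms here are illustrative, not from the post):

```python
# Sketch: compose "site:" footprint queries like the examples above.
# The default footprint terms are examples, not an exhaustive list.

def build_queries(niche, tlds=(".edu", ".gov"), footprints=("blog", "guestbook", "comment")):
    """Return Google query strings like 'site:.edu internet marketing blog'."""
    return [f"site:{tld} {niche} {footprint}" for tld in tlds for footprint in footprints]

for query in build_queries("internet marketing"):
    print(query)
```

Each resulting string can be pasted straight into Google to surface a different slice of commentable pages.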

Edu Guestbooks – Guestbooks, too, can be quite good for dropping backlinks.

RSS Feed Directories – This one’s for the bloggers and forum owners too; anything with an RSS feed will do, and even if you fake an RSS feed and randomize it, it can still work.
Here is a good list of places to submit your feeds, or you can use automated software like RSS Announcer or RSS Submitter, or Bloggergenerators’ free blog-and-ping tools; there’s a ton out there.

* Feedest.com
* Postami.com
* 2RSS.com
* FeedsFarm.com
* RssFeeds.com
* Feeds4all.com
* Plazoo.com
* FeedBomb.com
* Page2go2.com
* Feedooyoo.com
* RSSmicro.com
* FeedFury.com
* Octora.com
* FindRSS.net
* FeedBase.net
* RSSmotron.com
* MoreNews.be
* DayTimeNews.com
* Rss-Feeds-Submission.com
* MillionRSS.com
* Yahoo RSS Guide
* MySpace.com News
* ReadABlog.com
* GoldenFeed.com
* BlogDigger.com
* RSSFeeds.com
* feed24.com
* Findory.com
* WeBlogAlot.com
* FeedBoy.com
* Chordata.info
* BlogPulse.com
* DayPop.com
* IceRocket.com
* Memigo.com
* Syndic8.com
* RSS-Network.com
* Feed-Directory.com
* Jordomedia.com
* Newgie.com
* Feeds2read.net
* NewzAlert.com
* Feedcycle.com
* Bloogz.com
* FeedShark.BrainBliss.com
* FeedPlex.com
* RocketInfo.com

Tagbox Linkdropping – Another one like the Cbox is Tagbox; it works the exact same way.
Google query: Powered by Tagbox

Contests – Contests are a good way to get word of mouth going or bloggers blogging about you. A good example is a giveaway for, say, the person who sends the most traffic to you; you can use a link-trading script or a referer script to see who sends the most. This will bring links to you if you work it smartly.

Digging – Digg, like any such social news network, brings an incredible amount of backlinks and gets the buzz around fast.

Digg Comments – Digg comments can get you a few links, and the latest comment is usually at the top, so it’s an idea to post on the popular Diggs.

Commenthunt – Most of you already know this, but Commenthunt is a search engine that finds blogs without nofollow tags, so you can search for relevant blogs to comment on. The URL is http://www.commenthunt.com

Oggix.com – Here’s another shoutbox-type source of backlinks. Check This Query and get dropping links.

A Free EDU blog of your own – Get a free EDU blog of your own, or many, just by signing up. Here’s the link: Free Edu Blog

Digitalpoint CO-Op Network – Another one that can work in some cases.
Digitalpoint developed a mass link exchange program called the DP Co-op Advertising Network (aff). After signing up, you then add 3-5 links on every one of your pages, and this earns you more linking power (coop weight). The more weight you have, the more links to your site you receive from other members in the coop. You can choose up to 15 anchor texts and there are over 30,000,000 available links in the network today. Sites have been using it to rank #1 for “Debt” “credit cards” “bankruptcy” and “loans.” Such a simple method is allowing them to outrank massive authority sites like Wikipedia, but the main concern is how long will this gravy train last and when will Google do something about it?

Google staff already know about the network, but have not yet done anything to prevent people from quickly ranking for popular terms. I just want to clarify that I would NOT recommend this technique to anyone that is going to be doing a long term link building campaign for their blogs, but I would recommend it for “made for adsense” sites, and even blackhat/greyhat temporary high profit earning sites.

Flickr Spamming – Flickr allows comments on photos taken by other people. You can go wild and mass-comment, but I wouldn’t recommend it; instead, pick suitable pics for your niche and simply write a comment saying something smart like “Can I use this picture on my blog here?”, then drop your link. Or go wild and do it anywhere.

Article Submission – If you have a product that’s about to be released, or a new site, it’s best to get the word around fast using something like Article Equaliser or another tool that mass-submits to a ton of article directories. This builds backlinks fast, though approval sometimes takes time, and it’s best not to use spammy articles; interesting ones work better. You’ll also find that a lot of other sites scrape article sites for their own content, giving you even more backlinks.

Using Software – You can also use software like Internet Business Promoter (I prefer version 8), because it scans the engines for you, gets links using keywords, and semi-automatically fills in link-submission forms for niche-related directories. Although it is a slow process, it works well.

How To Cloak Links The Simple Way

There are people on the Net who honestly think link cloaking is ‘not done’, because visitors can’t see where they are redirected to if they click the link. Hmm, that’s a great argument to consider, until…

…your affiliate ID is taken from the link, so you will not earn a commission. In that case, you definitely WANT to cloak your links, don’t you?

Another reason to cloak links is that you want to count how many people have clicked on them. Thus you can discover which links work well and which don’t.

Yet another reason could be that your (affiliate) links are long and ugly. Especially when you use them in text email, they take up a lot of space and sometimes break into two or more lines.

So yes, there are valid reasons to mask your links.

There’s a lot of software available that can do the job very well, but there are also easy ways to cloak your links yourself.

One of the ways is to use a simple HTML redirect. You simply add the following line between the <head> and </head> tags:

<meta http-equiv="REFRESH" content="0;url=http://www.the-link-to-redirect-to.com">

Just replace ‘the-link-to-redirect-to.com’ with your link and you’re done.

IF you have log files available and IF they record access to such pages, then you have a counter too. That’s two ‘IF’s, however.
Also, from what I have read from the SEO experts, this kind of link cloaking isn’t appreciated by search engines, because lots of people have abused this method.

You can also use JavaScript to cloak your links quite easily. Here’s an example:

<script type="text/javascript">
<!--
window.location = "http://www.the-link-to-redirect-to.com";
//-->
</script>

This redirect doesn’t leave a single trace, but it’s hard to count clicks to this page.

The same applies to this link cloak in PHP:
<?php
header("Location: http://www.the-link-to-redirect-to.com");
?>

You can also use an .htaccess redirect to cloak your links, and even use a rewrite rule, but although these solutions are easy to implement, they are too complicated for this post. I want to talk about an easier one.
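For reference, the simplest .htaccess version might look something like this (assuming Apache with mod_alias enabled; the /go/product path and the destination URL are hypothetical):

```apache
# Redirect the cloaked path to the real URL (302 = temporary redirect)
Redirect 302 /go/product http://www.the-link-to-redirect-to.com/
```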

You see, I use a simple PHP script that is very easy to set up. No PHP knowledge required. And it counts clicks! Nothing special, but it does its job very well.
Plus, and that’s another advantage of cloaking your links, if the page you’re referring to disappears from the Net, you can easily point your cloaked link at another one!

Lots of these scripts use a MySQL database to store the information, but this one doesn’t: it uses a simple text file. No need to set up another database!
It’s a great balance between functionality and ease of installation.

Plus…
this script allows you to back up the text file containing your links from the screen, so if anything goes wrong, you’ll have a backup copy on your hard disk.
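The post doesn’t include the script itself. Purely as an illustration, the flat-file count-and-redirect logic it describes looks roughly like this, sketched here in Python rather than PHP; the file name and link IDs are hypothetical:

```python
# Illustrative sketch of a flat-file count-and-redirect script
# (the post's actual script is PHP). Names here are hypothetical.
import os

LINKS_FILE = "links.txt"  # one line per link: id|destination_url|click_count

def load_links(path=LINKS_FILE):
    """Read the flat text file into a dict of link_id -> [url, count]."""
    links = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                link_id, url, count = line.rstrip("\n").split("|")
                links[link_id] = [url, int(count)]
    return links

def click(link_id, path=LINKS_FILE):
    """Record one click on link_id and return the URL to redirect to."""
    links = load_links(path)
    url, count = links[link_id]
    links[link_id] = [url, count + 1]
    with open(path, "w") as f:
        for lid, (u, c) in links.items():
            f.write(f"{lid}|{u}|{c}\n")
    return url
```

A web front end would simply call `click()` with the requested ID and issue a Location header to the returned URL; swapping a dead affiliate link then means editing one line of the text file.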

And you may want to add rel=”nofollow” to all of your cloaked links, as they refer to your counter script, which is useless to search engines.
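For example, a cloaked link with the attribute added might look like this (the redirect path and ID are hypothetical):

```html
<a href="redirect.php?id=prod" rel="nofollow">Check out this product</a>
```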

Other software that you can check out is Smart Links. It allows you to turn affiliate links that nobody wants to click on into cash-generating magnets.

Another piece of very interesting software that is worth checking out is Affiliate ID Manager. It’s a program where you can store and manage all your IDs, passwords, affiliate links and other relevant details in ONE secured application. All your info will be organized and available to you, so you can concentrate on your work instead of searching for your missing links… This essential tool will save you lots of time and money and comes with …..Master Rebrandable Rights !!!

You can try to find this software on Google, or receive it completely Free.

(They are easy to find:  just do a search for the product name!)

So, how about you?
Do you cloak your links and if yes, how?

Subscribe to BlackHatBUZZ



