
SEO Web Design

How Web Designers, Web Developers, and SEO Collaborate

There has been a long-running conflict between web designers and SEO professionals. It arises because neither side fully appreciates the balance between what the browser shows visitors and what search engine crawlers actually read.

Web designers focus on how the code functions for the user and on how the website appears across different browsers.

SEO professionals take a different, more pragmatic view of websites: crawlers must be able to give the keywords maximum weight, or the website is a waste of time and money.

Both sides are correct. However, SEO design is typically a lean PSD-to-CSS conversion with limited media, while web designers' pages can run to thousands of lines of code, burying the content under a code-to-text ratio that sometimes exceeds 90% code.

There is a balance between web design and SEO design.

The BASICS of Search Engine Optimized Web Design

The first thing to understand is that a page can rank with as little as 25% text to 75% code, as long as the content section is not buried under hundreds of lines of code. There are several ways to achieve this:

  1. Divide a page into files: header, navigation, sidebars, footer, content (a minimal sketch follows this list)
  2. Make static pages whenever possible. Crawlers love plain HTML; web designers love PHP. Many supporting files can simply be left as static HTML in the file manager for SEO purposes
  3. Avoid putting text into an image whenever possible. As an SEO, I've seen pages jump dozens of SERP positions just by changing two or three images into real text headings
  4. Name files with keywords. Naming files Blog, News, or Articles sounds logical, but a website can jump toward the top of the search engines when those names are changed to keyword-1, keyword-2, and keyword-3, because this puts the keyword into the URL and improves its keyword weight
  5. Put dynamic and media elements into their own files: Flash, JavaScript, MP3
  6. Make sure that each SEO element (title, description, keywords) is set on the individual page, NOT hard-coded in a shared header. There are ways to do this in any CMS and in most shopping carts.
  7. Run each page through a speed tool; the page must load fast. Google offers several tutorials for lowering a page's download time, and a slow page loses a significant amount of SEO value: https://code.google.com/speed/page-speed/docs/payload.html

    From the Google Webmaster’s site, “strive to keep them (packets) under 1500 bytes”

    NOTE: It is widely denied, but several informal tests suggest that Google only crawls one level down.
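
A minimal sketch of point 1, assuming a plain PHP site; the file names are illustrative, not a required convention:

<?php // page.php - every structural element lives in its own include file
include 'header.php';      // light <head>: title and per-page meta tags
?>
<div id="content">
  <h1>Keyword-Rich Page Title</h1>
  <p>The article text sits near the top of the source, where crawlers reach it quickly.</p>
</div>
<?php
include 'navigation.php';  // the menu, moved into place visually by the stylesheet
include 'sidebar.php';
include 'footer.php';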

Everyone in the web design world should read Google's Search Engine Optimization Starter Guide. It is an easy-to-download PDF:
www.google.com/webmasters/docs/search-engine-optimization-starter-guide.pdf

URLs and Google: Static vs Dynamic URLs

You can get two different answers from Google depending on whether you read the Webmaster Central blog or the SEO tutorials.

  1. SEO technicians still hold that static URLs are better than dynamic ones. Google says it has no problem with dynamic URLs and, in fact, recommends leaving them dynamic: https://googlewebmastercentral.blogspot.com/2008/09/dynamic-urls-vs-static-urls.html
  2. If you keep a dynamic URL structure, keep it relevant: eliminate session and calendar parameters and keep the URL as short and stable as possible.
  3. WordPress learned how to rewrite dynamic URLs without losing SEO value; copy what it does. However, if you are not skilled enough to do this without corrupting the site, leave the dynamic URLs alone.
  4. Can you use keywords instead of directory1 and directory2 in the path? Advanced web designers sometimes do.
  5. Even Matt Cutts claims that Page Rank is passed on to dynamic URLs. However, a quick search around the net, and a check of older sites with dynamic URLs, suggests this is just not true. Low Page Rank means a low chance of reaching the top search engine pages.

Having stated this, Google then turns around and explains (in its starter PDF) one reason why some websites sit at the top of the SERPs yet receive very few hits: the URL is part of the 'sales' process. Google highlights keywords from the URL when they appear on the results page, and dynamic URLs are clicked less often than static ones.

Why does Google recommend dynamic URLs over static ones when there is a definite SEO advantage to having the keyword appear in the search results? The answer is simple, and controversial; 'The Facts' below spells it out.

More on URLs: https://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=76329. I suggest reading this page because it covers some very important points (e.g., Google wants to see hyphens, not underscores, in URLs) that are not discussed in this tutorial.

The short answer: PAGE RANK flows the same on dynamic and static URLs.

Some Solutions:

Note: Some search engine robots have difficulty with dynamically generated pages, especially those whose URLs contain long query strings. Some search engines, even Google, index only a portion of dynamically generated pages. Sometimes there is so much code on a page that the code is all they index.

Frames, Ajax, buttons, drop-down lists, cookies, and Flash cause problems with indexing and are generally best left out of a design when optimum indexing is the goal.

NOTE: One of the best ways for a web design or SEO company to steal a client is to track down the dynamically created URLs that ‘appear’ to be duplicate content and raise the alarm that there are cloaked websites and major duplicate content problems.

The Facts: The facts are simple. Truth #1: Google wants to know exactly what your content is about and how relevant it is to a search, and it does not want your SEO team to control the top search pages. Truth #2: Google wants its #1 to #3 results pages to look attractive and usable. Both are true. Your choice is whether you support Google with Truth #1, or whether Google supports your website through Truth #2.

URL Issue #2

The second issue is with symbols that do not need to be in the URL, such as % and apostrophes: even if crawlers accept them, they will not be accepted when submitting to search engines, directories, or secondary search/bookmarking sites such as Technorati and Digg.

The same problem is caused by URLs that change. The New York Times is the benchmark here: try submitting one of its URLs and many of the pages come back 'not found' because the URL has changed.

Too many directories result in clumsy URLs that are often mistaken for spam. Multiple directories also make web pages look 'deep' in the site, and most search engines only crawl one or two levels deep.

File Names

File names should include keywords. Google tells webmasters to avoid generic file names such as image1, image2, blog, articles, or general.
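
As a rough illustration of keyword-friendly naming (hyphens rather than underscores, and no stray symbols), here is a small PHP sketch; the function is my own, not part of any CMS:

<?php
// Turn a working title into a keyword-friendly slug: lowercase, hyphens instead of
// spaces or underscores, and no %, apostrophes, or other symbols.
function keyword_slug($title) {
    $slug = strtolower($title);
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug); // collapse each run of symbols into one hyphen
    return trim($slug, '-');
}

echo keyword_slug("Bob's 50% Off Widget Sale"); // prints: bob-s-50-off-widget-sale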

Dynamic Websites & Functions (Applications)

There are good reasons to put a website into a database: it keeps the file manager clean, and some websites are so massive that it is impossible to organize the files by hand. However, a few elements must be addressed.

The most important question to ask is, "How will the pages be structured so that each page can have its own title, description, and keywords in the header?" This includes adding a module to the content management system so that users can enter the meta tags when they add a new article or blog post.
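
One hedged sketch of how such a module might pass per-post meta data into the page header; the variable names are hypothetical, not part of any particular CMS:

<?php // header.php - assumes the CMS sets these variables for the article being displayed
$page_title       = $page_title       ?? 'Default Site Title';
$meta_description = $meta_description ?? 'Default site description';
$meta_keywords    = $meta_keywords    ?? 'default, keywords';
?>
<head>
  <title><?php echo htmlspecialchars($page_title); ?></title>
  <meta name="description" content="<?php echo htmlspecialchars($meta_description); ?>">
  <meta name="keywords" content="<?php echo htmlspecialchars($meta_keywords); ?>">
</head>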

Search engine robots cannot "choose" from drop-down lists, click a submit button, or follow JavaScript. The extra code needed to script web pages can trip up the robots and prevent them from indexing the page in full.

Long blocks of JavaScript force search engine robots to wade through script before they reach the text. Offload your JavaScript and CSS into external files so the robots reach your content sooner and the page loads faster.
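
A minimal example of offloading, assuming the styles and scripts live in external files (the file names are placeholders); this fragment would sit in header.php:

<head>
  <!-- keep the head light: reference external files instead of pasting CSS or JavaScript inline -->
  <link rel="stylesheet" href="/css/style.css">
  <script src="/js/site.js" defer></script>
</head>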

Google claims that old code, bad code, and dynamic code do not affect ranking. Yet on its Starter Guide pages, Google also admits that it cannot read some dynamic content.

Google has a tool called Fetch as Googlebot in Webmaster Tools. If you'd like to test how good a designer you are, verify a website you have built and run Fetch as Googlebot over it. Don't be surprised if Google doesn't find your content; this is common. If you design for an SEO company, it is vital that your dynamic code can be read by Googlebot, because this tool is often used to discredit an SEO company and to steal its clients.

Here is another tool that can be used to see how different search engines crawl a page:
https://www.smart-it-consulting.com/internet/google/googlebot-spoofer/index.htm

Again, the goal is simply to make sure that the code for headers, sidebars, Flash, and so on does not push your content to the bottom of the source.

NOTE: Canonical issues must be addressed so that 'other' SEO companies cannot accuse us of cloaking and so that crawlers do not see duplicate content. Every post in a blog or CMS should include a canonical tag pointing at the original version of the page.
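
The tag itself is a single line in the <head> of every variant of a post (print view, paginated view, session-ID URLs); the URL below is a placeholder:

<link rel="canonical" href="https://www.example.com/blog/keyword-rich-post-title/">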

PAGE LAYOUT

The important thing for webmasters to understand is that, beyond file structure and URLs, almost all on-page SEO is a template function. CSS and PSD-based themes are favored; PSDs offer better control over layout and color. It is also possible to lay out the whole page while keeping the text readable as text, something that cannot be done in formats that turn text into 'artistic text' or an image such as a GIF, JPG, or PNG.

Search engines love CSS and HTML4/5; the markup is easy for them to read. They can parse almost any markup, but they cannot read early HTML as easily as clean HTML5. That is why the code MUST be run through the W3C validation tool. The code MUST be SEO friendly.

NOTE: Never, ever, put CSS directly on the web page. Crawlers are not meant to read CSS. Stylesheets exist only to build a visual page that organizes content for both search engines (i.e., the content div sits at the top of the PHP file) and people (i.e., the content appears on screen where it is meant to appear).

On-page elements that give crawlers trouble were listed above: frames, Ajax, drop-down lists, cookies, and Flash.

CMS and Blogs

There are several reasons why every website needs a blog and/or a content management system. First, most of the open-source ones are already optimized, or ship with the tools needed to optimize them. Second, they make it easy to add new content to the website.

Most people do not understand that SEO is not a one-time task. If a website is not updated weekly, it will not rank highly across multiple keywords. There is a bonus for posting weekly, and an even bigger one for posting daily. If a website is not updated for 90+ days, search engines will start penalizing it, even if the world's best SEO optimized the website to 100% efficiency.

The main reason is that search engines do not drive all the traffic. Some sites, like Myspace, drive more traffic than Yahoo and Bing combined, and many websites receive 50% or more of their traffic from bookmarking and social networking sites.

The search engines watch these sites. When new bookmarks are posted on a site like Digg, or a blog post is published and 'pinged', the website receives a major boost in its search rankings for a few hours. Constant pinging and posting is one of the best ways to reach the top of the search engines.

NOTE: This is only possible if the pages are static and can be optimized directly, or if a mod is written that lets the writers add meta data to each page. And that meta data must appear in the page header.

MEDIA and FLASH

It is a myth that a search-engine-optimized website cannot be dynamic. Media can now be filed two or three levels down from the top of the website's architecture and 'called in', so a website can include almost anything IF it follows the page layout explained below.

BOT HERDING – The purpose of CSS

Bot herding is the task of creating a template that controls how bots read a web page and which elements they assign the most value. WEB DESIGNERS CAN PUT THE CONTENT AT THE TOP OF A PAGE, as long as the developer has broken the page elements into separate files.

Can you control how bots crawl a page? Yes. Is it worth it? YES: any firm that can deliver verifiable SEO design has a great advantage over other SEO companies, and the firms that do this are almost impossible to catch up to. One of the first things a good SEO does when taking on a new client is to 'view source' on the competitors and make sure they are not up against one of the SEO giants who understand bot herding.

Bot herding is a web design task

Sample
/* #content comes first in the HTML source, so crawlers read the main content before anything else */
#content {
    margin-top: 20px;
    margin-left: 160px;  /* leaves room for the absolutely positioned menu on the left */
    width: 525px;
}
/* the navigation menu sits later in the source but is pinned to the top left visually */
#nav-menu {
    position: absolute;
    left: 10px;
    top: 20px;
    width: 150px;
}
/* plain, unbulleted list for the menu items */
#nav-menu ul {
    list-style-type: none;
    padding: 0px;
    margin: 0px;
    width: 135px;
}

It is not a web development task.
It is not an SEO task.

It is possible to structure a page this way in CSS and HTML by using <div> tags and absolute positioning, with each element in its own file:

  1. content.php (or page.php)
  2. header.php
  3. banner.php
  4. navigation.php
  5. right-sidebar.php
  6. left-sidebar.php
  7. javascript.php and flash.php
  8. footer.php
Search engine view    Human view with absolute positioning    Page Layout
Content               Banner/header                           Top of page
Header                Flash in banner                         Top of page
Banner                Navigation                              Top of page
Navigation            Right sidebar                           Column 1
Right sidebar         Content (middle of page)                Middle column
Left sidebar          Java and flash                          Java and flash
Java/flash            Left sidebar                            Column 3
Footer                Footer                                  Bottom of page

Now, when Googlebot reads the page, it sees the content first. This is easy to achieve with absolute positioning and <div> tags.
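
A sketch of that source order in a PHP template, reusing the file names from the list above and the #content and #nav-menu ids from the CSS sample; the exact file names are an assumption:

<?php // page.php - the content block comes first in the source; the CSS positions everything visually
include 'header.php';        // light head: title, meta tags, links to external CSS and JS
?>
<div id="content">
  <!-- the article text: the first thing Googlebot reads -->
</div>
<div id="nav-menu">
  <?php include 'navigation.php'; ?>
</div>
<?php
include 'banner.php';
include 'right-sidebar.php';
include 'left-sidebar.php';
include 'footer.php';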

There are other advantages that a Web Designer and SEO tech can enjoy:

  1. The web designer can use H1, H2, and H3 tags and 'force' them to display in whatever size and font they want; it is possible to have an H1 title that looks like plain bold text.
  2. Background images can be used for banners and logos – so the content is the first thing the crawlers read.
  3. PHP and HTML pages stay readable even without the CSS

W3C Compliance

The web design world has not caught up with the SEO world when it comes to W3C compliance. Today, making a website 'compliant' and validating it can seem like a waste of time. However, as more handheld devices become web friendly, that option will disappear. At any time, Google could demand that only W3C-compliant websites appear on mobile devices.

At first glance this may not seem like a crippling change; after all, most websites are still viewed on desktops. A problem arises when we ask whether connections over wireless modems will be treated as mobile or desktop. It is very possible that a non-W3C-compliant site (built predominantly of fixed images like JPGs and PNGs, or carrying a lot of PHP code) will not appear on any desktop that connects through a wireless modem.

We already see W3C compliance push websites into the top rankings within one or two crawls. It is no secret that Google plans to make W3C compliance mandatory, not just for websites that cater to the vision and hearing impaired; they do not hide this fact. Do a broad check of 100 websites across 200 keywords and a subtle but undeniable pattern appears: most of the websites at the top are W3C compliant, especially once you discount those 'grandfathered in' or backed by large SEO campaigns.

NOTE: It is important for webmasters to remember that an SEO company needs a website that is crawled two or three times a month; every 24 hours is better. To achieve this, a web page needs PR3 and needs to be 'deep crawled.' Google only crawls a couple of levels deep, so huge robots files and massive navigation do not help. The page must meet these requirements: PR3 or higher, plenty of inbound links, verification in Google Webmaster Tools, new content (in a top layer) every time the website is crawled, and code that is easy to crawl, i.e., W3C compliant.

Google will say that W3C compliance does not affect SEO. Google also says that building a website that is NOT W3C compliant is 'asking for trouble' when it comes to SEO and ranking.

Site Architecture

The SEO of a website is usually a shared task between the web designer and the SEO team. The developers need to make sure the URLs are SEO friendly, the pages are broken up so the template designer can use them, and the architecture is sound.

Most developers wait until the designer makes a menu and then develop the structure around it. This can cause more SEO damage than the SEO team can repair. Most developers are concerned with one thing only: usability and findability. Page Rank, indexing, and crawlability are blatantly ignored.

  1. Put the password-protected parts of the website below the top layer
  2. Put any dynamic content that will not be optimized in a sub-layer, e.g., a shopping cart
  3. Put any content that will be optimized at the top of the website, e.g., a blog
  4. Make static pages for any element that needs to be optimized – header, footer, sidebars, navigation, etc.
  5. Put images at least on the second layer and block access from search engines.
  6. Put articles and other pages with more than 50% text in the top layer.
  7. A completely flat structure is not optimal, because there are many parts of a large website that SEO teams do not want search engines to crawl; it is these 'sub-parts' that cause duplicate content problems.
  8. Keeping the content a user wants within two clicks is good, but that does not mean that images, Flash, and other content that cannot be crawled should sit on the same level.
  9. Keep the number of links on one page as low as possible; past roughly 75, little Page Rank value is passed on. The lower the number, the more Page Rank value each link carries.

Web DEVELOPER:

It is easy if all of these elements are in their own files: header.php, navigation.php, right-sidebar.php, left-sidebar.php, javascript1.php, flash2.php. When a site is built this way, it is ten times easier for a template designer to add the layout to a content management system, and it eliminates the 'every page looks the same' feel of blog and CMS design.

SEO SUICIDE: Clients love to have the new website sitting on their live server so they can take a look and watch the progress. This is SEO suicide. A crawler should never be allowed to see a website until it is finished; it will apply a seemingly unlimited number of penalties that may take 90 to 150 days to even start repairing, and it can take more than a year to undo the damage.

Web DESIGNER/SEO Designer:

The designer has more freedom when this format is used and can still create a search-engine-optimized website. They can use any type of user interface, media, or design they want, and the page will still rank well on search engines.

Search engines do not care whether a web designer uses absolute positioning, floats, or layers several sections of the site over each other. Even simple tables are allowed (without affecting SEO value) as long as they do not force the whole page into one box. With this strategy, the only search engine limits the designer faces are page loading speed and size, and the one 'real' limit is that the code MUST be W3C compliant; this is no longer optional.

SEO tech:

The SEO technician then optimizes every element to generate 80% or higher 'keyword weight' while using NOFOLLOW and ALT attributes to keep 'keyword density' in check. Of course, any SEO is going to rave about CSS and PSD-to-CSS, but they are flexible.

Once an SEO has a template/layout designed like this, they can go beyond ‘optimization’ of a web page and enter the advanced realm of Page Rank Sculpting and Bot Herding.

The Writer:

The writer can take advantage of pagination, media, and other tools aimed at 'human consumption' without damaging the SEO value of a page.

Summary

  1. There will still be 128 elements on the page that need to be optimized.
  2. There are still 200 elements that are calculated to determine Page Rank.
  3. The content-to-code ratio must still remain above 25% text to 75% code (a rough way to check this is sketched after this list).
  4. There will still be an opportunity to write code once and reduce the amount of time spent coding.
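
A rough way to check item 3 on a finished page, using nothing beyond standard PHP; the URL is a placeholder:

<?php
// Rough code-to-text check: compare the visible text length against the full HTML source length.
$html = file_get_contents('https://www.example.com/'); // placeholder URL
if ($html !== false && strlen($html) > 0) {
    $text = trim(strip_tags($html));
    printf("Text ratio: %.1f%%\n", strlen($text) / strlen($html) * 100);
}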

The Benefits Earned

  1. The pages should load faster.
  2. The search engines will index more pages.
  3. The pages will pass Page Rank 'deeper' into the site.
  4. The website will rank higher in the search results.
  5. Search engines will recognize more inbound links because it will be easier to calculate the 'relevance' between the off-site and on-site links.
  6. You no longer need to worry about whether the search engines crawl 1,500 or 2,500 characters of code, because the only thing above the content on the page is a 'light' header.
  7. Websites can be more 'media rich.'
  8. Search query relevancy is determined by over 200 factors. Google will now be able to see more of those factors on your web page and match it to more search queries.
  9. Now that Page Rank value 'evaporates' when Google does not find relevance, building relevance matters more than ever if you want to retain your hard-earned Page Rank.
  10. Only 3% of all websites reach the top search positions in a given month. Almost all of them have implemented the elements described in this tutorial, along with a good server and a private IP address.
  11. Higher Page Rank means a web page is crawled more often, which gives the SEO company more chances to show movement in the Search Engine Results Pages (SERPs).

Call (877) 278-0275 and we'll produce a report that will illuminate the possibilities.