Posts

Search Engine Optimization at El Corte Inglés in Sabadell

On Wednesday at 7 pm, the Search Engine Optimization class will be held in the Cultural Hall of El Corte Inglés in Sabadell. It is part of the lecture series on Digital Marketing that we are giving in this shopping centre.

In this class we explain the keys to appearing in the top positions in searches performed on Google and other search engines.

We explain both how a website should be built so that it is indexed optimally and, once the page is indexed, what we should do to improve its position.

We discuss in depth the concepts of density, prominence, reliability, the PageRank algorithm and many other issues related to improving SEO. We also explain which tools are available on the internet, both to track our position and to improve it.

Attendance is free but seating is limited.
If you wish to attend, please contact El Corte Inglés in Sabadell: tel. 937284800 ext. 3240.

See you there.

If you cannot come... you may be interested in this:

The PPT presentation that will illustrate the talk: Search Engine Optimization (in Catalan)
Free search engine optimization course on the GEAIPC website: Search Engine Optimization

Presenting the Digital Marketing Guide in Manlleu

Today the Digital Marketing Guide, published by the Barcelona Chamber of Commerce (Cámara de Comercio de Barcelona) and PIMESTIC, was presented in Manlleu, at the Chamber's offices in this city.

Part of the presentation consisted of a talk of mine entitled "9 Techniques to Attract Visitors to a Website". In this talk I explain 9 ways to attract traffic to a website.

The presentation was a success: the room was full, companies asked questions and, on the way out, quite a few people came to talk to me and to ask about the PIMESTIC grants for companies.

As always, I may have spent too long on search engine optimization, but since it is the number 1 technique, the cheapest and the most effective, I think it is worth explaining it in a little more depth than the schedule allows me... and today has not been an exception. I also think I went on too long explaining how to optimize an SEM campaign... but again, people kept asking for more. How could I refuse?

In short, instead of one hour, my presentation lasted two hours... but no one complained, and no one left before the end, so I think they found it useful and immediately applicable to their businesses.

If anyone from the audience visits the blog, here are the links related to the talk:

Next stop: Vilafranca del Penedès on 31 March at 9:30 am, at the offices of the Barcelona Chamber of Commerce in that city.

Obama and the White House Robots.txt

There has been a lot of talk about how Barack Obama used the Internet to publicize his candidacy and to mobilize voters. The ambitious technology plan of Obama for America has also been discussed on many blogs (you can read it here).

But there is one thing that has caught my attention and that few people have noticed: the change in the Robots.txt file of the White House website, very much in line with what Obama preaches.

What is Robots.txt?

It is a text file containing instructions about which pages of a website may and may not be visited by robots. That is, it indicates which parts of the website should not be crawled by robots.

Normally, it is content that appears on the website but that we only want to be accessible to people browsing the web; we do not want it to be indexed and to appear in search engine results. It is also used when a content management system creates duplicate content, to avoid being penalized for it by search engines.

This file is created following the instructions that can be found here: Robots. All robots that follow the "Robots Exclusion Protocol" undertake to obey these instructions.

If a website has not created this text file, robots understand that they can index everything (although, since they will still request robots.txt from the site, they will generate 404 errors; it is therefore recommended to create a blank page, name it robots.txt and upload it by FTP, so that any 404 errors the webmaster then sees are real ones and can be dealt with).
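
To give an idea of what these instructions look like, here is a minimal robots.txt sketch (the folder names are hypothetical, just for illustration):

    # Keep all robots out of two hypothetical private folders
    User-agent: *
    Disallow: /private/
    Disallow: /drafts/

    # An empty Disallow (or a completely blank file) means "everything may be indexed":
    # User-agent: *
    # Disallow:

The completely blank file mentioned above simply tells the robots that nothing is off-limits, while still preventing the 404 errors.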

Let's return to the White House Robots.txt

Until a few days ago, when I explained in class what a Robots.txt file is and what the "Robots Exclusion Protocol" is, I used several examples to illustrate the different types of robots.txt we can create to instruct the indexing robots:

  • A blank robots.txt page
  • A robots.txt page with more or less "normal" instructions
  • A completely exaggerated and ill-conceived robots.txt page.

Well... Obama has "sabotaged" my examples and done away with my example of bad practice regarding Robots.txt: the webmaster of the new White House website has created a new Robots.txt that is well done, clear and concise.

The webmaster of George Bush Jr. had created a robots.txt with thousands and thousands of pages to which robots were forbidden access. It must be said that there was nothing interesting in that content (at one point I took the time to read what they did not want indexed... pictures of the first lady, speeches, etc.). But it showed clearly that the White House had a somewhat archaic concept of what the internet and publishing content are.

The new webmaster, in this sense, has shown that he has a much clearer idea of what the website of an institution like the White House should be.

OK... but what was this Robots.txt like?

Fortunately, in the slides for my classes I always include screenshots of what I explain, in case the internet fails me or the classroom has no connection... (how sad to always have to think about this possibility).

So below these lines (at the end of the post) I include the image I have on file, which now becomes history... (Look at the scroll bar of the screenshot... it is what shows the magnitude of the listing.)

You can see the current robots.txt page by clicking here: Robots.txt of the White House under Obama.

For more information about how to create a Robots.txt file and what it is for, see here: Robots.txt, and the free Search Engine Optimization course on our website: Search Engine Optimization Course.

Robots.txt of the White House

Open Lecture Series: "The 16 things you need to know to sell online"

Today I had the pleasure of teaching this Digital Marketing class at the University Graduate Institute, an institution that teaches online master's degrees for professionals and was created by Santillana Training together with the Universities of Alicante, Carlos III of Madrid and the Autonomous University of Barcelona.

The class was taught in video format and is part of the Open Lecture Series given by prestigious speakers at the University Graduate Institute (IUP). The recording of the class will soon be available on the IUP website (www.iup.es).

Everything went as planned, except for the length of the class, which was a little longer than we had in mind... the truth is that I find it very hard to condense so much information into so little time. Especially when we touch on subjects I am passionate about, such as SEO, the optimization of SEM campaigns and usability. There is no way I can be more concise and go into less detail... even so, I am always left with the feeling that I could have explained more. But time is time... the complete Digital Marketing class takes at least 20 hours... there is no way to condense that into one hour, not even by reading only the table of contents of the syllabus. That is why today's class was limited to the 9 basic techniques for attracting traffic to a website, and we only touched briefly on the 6 basic techniques for converting visits into business contacts.

I hope that, apart from learning that these techniques exist, the students have been left wanting to know more and are now looking for additional information.

Here is the presentation used in class: Digital Marketing - Basic Techniques.

When I have the link to the recording, I will post it here too.

1st Roundtable on web search engines: marketing and search engine optimization

On Tuesday 28 October at 18:30, in the Auditorium of the Ramblas building of Universitat Pompeu Fabra (Ramblas 32, Barcelona), a new edition of the Master in Search Engine Optimization and Digital Marketing will be presented, along with a panel discussion on web search.

I have the pleasure of sharing the roundtable with Fernando Macia from Human Level Communications, who will talk about SEO, and Cristòfol Rovira from the UPF DigiDoc Research Group, who will discuss search engines from a teaching and research perspective. I will discuss SEM (Search Engine Marketing) and explain "6 techniques that will help us optimize our search engine marketing budget".

Each of us will present his vision of search engines and share experiences related to market intelligence, content strategy and brand positioning.

At the end of the presentations, a discussion will open in which attendees can ask their questions.


See you!

More information about the event: Roundtable Search Engine Optimization

This is the presentation with which I illustrated my talk:

Talk soon.

AJAX, a technique to be used sparingly

This article explains what AJAX is, when to use it and what its contraindications are. We also show how to overcome some of these contraindications.

What is AJAX?

AJAX stands for Asynchronous JavaScript And XML; that is, the combination of JavaScript and XML used asynchronously.

It is a technique developed for interactive web applications, which consists of making a set of three existing technologies work together effectively.

These technologies are:

  1. (X)HTML and CSS (Cascading Style Sheets), to structure and present the information on the website.
  2. JavaScript, used for dynamic interaction with the data.
  3. XML, used for interaction with the web server. It is not always necessary to use XML with AJAX applications; for example, information can also be stored in plain text files.

Like DHTML, AJAX is not itself an independent web technology, but a term that encompasses the three aforementioned technologies.

What is AJAX for?

AJAX is used to make changes to a web page on the user's side without having to reload the whole page.

For example, on a web page the user requests some information that is offered from that same page (such as the description of a product); on clicking the link, the requested information appears on the same page without it being loaded again.

Displaying the data entirely in HTML consumes significant bandwidth, since all the HTML has to be loaded again just to show the changes. An AJAX application, on the other hand, is much faster and consumes less bandwidth.

The JavaScript used in an AJAX application is a dynamic language, able to make changes to a web page without reloading it. AJAX ensures that only the necessary information is requested and processed, using SOAP or some other web services language loosely based on XML.

Hence, at a technical level, three advantages are obtained: a much lower load time, bandwidth savings for the user, and much less load on the server where the website is hosted.
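
As an illustration of the mechanism described above, here is a minimal JavaScript sketch of the classic AJAX pattern; the URL getdescription.php, the element id "description" and the product id are hypothetical, used only for the example:

    // Minimal AJAX sketch: fetch a product description and show it
    // without reloading the page. URL, id and element are invented.
    function showDescription(productId) {
      var xhr = new XMLHttpRequest(); // very old IE versions would need ActiveXObject("Microsoft.XMLHTTP")
      xhr.open("GET", "getdescription.php?id=" + encodeURIComponent(productId), true); // asynchronous request
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // Insert only the fragment we asked for; the rest of the page stays as it is
          document.getElementById("description").innerHTML = xhr.responseText;
        }
      };
      xhr.send(null);
    }

In the page itself, a link triggers the request and an empty container receives the answer:

    <a href="#" onclick="showDescription('chair-01'); return false;">Chair 01</a>
    <div id="description"></div>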

AJAX problems

Problems with search engine indexing:

AJAX is used by Google, Yahoo, Amazon and many other search engines, portals and content creators, but its use is not as general and massive as some think. Google, for example, which encourages webmasters to use AJAX in their programming, uses it itself in Gmail, Google Suggest and Google Maps, but not on absolutely all of its web pages.

The problem with AJAX is that the content displayed within an AJAX application is not indexed by search engines. This is because search engine spiders are not able to interact with the AJAX application and trigger the command that displays the content.

Hence, it is a bad idea, for example, to create a list with the names of our products and build an AJAX application in which, by clicking on a product name, the product description and photograph are displayed to the right of the list. If we do this, the product descriptions and their images will not be indexed by Google or by any other search engine.

It is not all bad news, though: certain ways of working with AJAX do get indexed, for example, showing or hiding content by playing with positive and negative margins. So we simply need to bear in mind, when programming, whether the spiders will or will not be able to get through.

Accessibility problems:

If we start from the premise that our website should always be accessible to all types of browsers and users, and should at least meet the W3C Level A standard (http://www.w3.org), we find that most scripts that improve the appearance and interactivity of a website have accessibility issues. AJAX has them too.

As we saw at the beginning of this article, using AJAX involves using JavaScript, and some browsers do not support this type of programming. Although, as we shall see, this can be solved.

But keep in mind that a large proportion of the AJAX applications we find in the libraries that exist on the Internet have not corrected this problem and are therefore applications that do not meet the W3C standards (at the end of this article we provide links to code libraries and articles dealing with the issue of accessibility and AJAX).

AJAX, to be used sparingly

As we have seen in the previous section, although AJAX applications bring dynamism, interactivity and reduced bandwidth to a website, they also have drawbacks in terms of search engine indexing and accessibility. Therefore, we must consider and neutralize the following:

  1. If we use AJAX on our websites, we must be aware that the content displayed within the AJAX application will not be indexed by search engines. To remedy this, we can create the content redundantly and make it accessible to the spiders through a sitemap or through links in the footer of the website (see the sketch after this list).
  2. If we use AJAX to make our website interactive, we must keep in mind that it will not meet Level A accessibility unless we use code libraries that comply with the W3C guidelines or provide a way to browse the site without using JavaScript.
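
As a sketch of point 1 and of the margin/visibility trick mentioned earlier: the description is already present in the HTML (so the spiders can read it), CSS merely pushes it off-screen, and JavaScript brings it into view on demand. The file name, ids and class names are hypothetical; without JavaScript, the link still leads to a normal page, which also helps with point 2.

    <!-- The content is in the page from the start, so spiders can index it;
         CSS moves it off-screen and JavaScript brings it back on demand. -->
    <style>
      .hidden-desc { position: absolute; left: -9999px; } /* off-screen, but still in the HTML */
    </style>
    <a href="chair-01.html"
       onclick="document.getElementById('desc-chair-01').className = ''; return false;">Chair 01</a>
    <div id="desc-chair-01" class="hidden-desc">
      A solid oak chair, handmade in Sabadell. (This text is indexable.)
    </div>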

Related links

New information on how Google indexes AJAX (March 2010): http://code.google.com/intl/es/web/ajaxcrawling/

Examples of Web pages that use AJAX and AJAX code libraries for use by webmasters:
http://ajaxpatterns.org/Ajax_Examples

Articles explaining how to obtain AJAX code that does comply with W3C Level A accessibility:
http://www.maxkiesler.com/

List of common accessibility errors:
http://www.w3.org/TR/WCAG20-SCRIPT-TECHS/#N11799

Google updates the PageRank value shown in its toolbars

Google PageRank update in May 2007.

As planned, this weekend Google updated the PageRank displayed in the Google Toolbar that users have installed in their browsers. Google only updates this information every four months.

What is PageRank and what is its importance in the algorithm that sorts results?

PageRank is the algorithm that Google uses to give a numerical value to the importance of a web page. This value is used as part of the algorithm that sets the order in which search results are displayed on Google.

PageRank is named in honor of its creator, Larry Page; it does not mean "ranking of pages".

The purpose of PageRank is to assign a numerical value to each web page according to the number of other pages that recommend it (link to it) and according to the PageRank those pages themselves have. That is, it establishes the importance of that page.
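
For the curious, this is the formula published in the original paper by Sergey Brin and Larry Page (Google's current calculation has evolved well beyond it):

    PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )

where T1...Tn are the pages that link to A, C(Ti) is the number of outgoing links on page Ti, and d is a damping factor, usually set to 0.85. In other words, each page distributes its own importance among the pages it links to.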

Since January 24 (the day Google changed its sorting algorithm to neutralize some of the tricks performed by unscrupulous webmasters), the reliability of the website also affects PageRank... (but I will not explain this now; it will be the subject of another article).

How do the updates work? The difference between the actual PageRank and the Google Toolbar PageRank

The PageRank shown in the Google Toolbar is only updated once every 4 months or so.

The toolbar shows PageRank in base 10 on a logarithmic scale. That is, it is easy to climb from 0 to 1 or from 2 to 3, but it is very difficult to climb from 5 to 6, and even more so from 6 to 7. But this is not the actual PageRank of our website; it is the value Google assigned the last time it updated the toolbar PageRank.

The previous update was carried out on January 24, and this time it has been done on May 1, a few days before the customary four months were up.

During 2006 there were 4 PageRank updates: in February, April, July and late September. In other words, on 4 occasions during 2006 Google calculated the value of PageRank in base 10 and exported it to the servers that feed the Google Toolbars. During 2007 it is following the same pattern.

The PageRank that Google uses for its calculations is much more accurate and uses a much larger scale; we do not know exactly which one, and Google maintains complete secrecy in this regard, although it seems to be base 100. Its internal servers update it daily.

When is the next update?

If all goes well, we should expect it in early September. So any actions we take from now on to increase our PageRank will not be reflected in the Google Toolbar until September.

This does not mean that our actions are useless before September. Nothing could be further from the truth. Remember that Google works with a real-time PageRank.

How can we know the PageRank in real time?

We cannot know the exact numerical value of PageRank in real time, but we can make an approximation to the real PageRank, although it is in base 4 instead of base 10 and uses relative values.

Permanent link: Discover what may become the substitute for Google's PageRank: the TRUST RANK

It is not much, but at least we will know whether we have a PageRank assigned on every page, and we will see whether the number of pages of our website moving from medium to high, or from low to medium, is increasing.

I do not see PageRank in my Google Toolbar. What do I have to do?

By default, the Google Toolbar does not show this information, but from the toolbar options you can enable PageRank and thus, while browsing the web, you will know the PageRank of the pages you visit. This will help you know which websites you should try to get to include links to your site in order to increase your PageRank.

Links of interest:

How can I improve my website's PageRank?
http://www.geamarketing.com/posicionamiento/mas_pagerank.php

Free online search engine optimization course:
http://www.geamarketing.com/posicionamiento_buscadores.php

How does https get indexed?

Https indexing is one of those mysteries that make the life of an SEO more interesting. While we know that it is possible to get it indexed in most search engines, hardly anyone knows how to achieve it in the shortest possible time.

What is https?

Https is the secure version of the http protocol. The difference between the two is that the former transmits data encrypted, while the latter transmits it unencrypted.

Https uses an encryption system based on Secure Sockets Layer (SSL) to send information.

The decryption of the information depends on the remote server and on the browser used by the user.

It is mainly used by banks, online stores, and any service that requires sending personal data or passwords.

How does https work?

Contrary to what many people think, https does not prevent access to information; it only encrypts it when it is transmitted. Hence, the content of a web page that uses the https protocol can be read by search engine spiders. What cannot be read is the content that is sent from the website to its server, for example, the login and password for access to a private area of the website.

The standard port for this protocol is 443.

How do we know that https is actually indexed?

Google has been indexing https since early 2002, and other search engines have gradually adapted their technology to index https as well.

The last search engine to do so was MSN, which achieved it in June 2006.

If we search for "https://www." or "inurl:https" in the major search engines, we will find https pages indexed in them.

How can we get our https pages indexed?

In principle, our https pages can be indexed naturally, but as this protocol transmits information much more slowly, spiders sometimes fail to download the pages within the time limit they have set and do not index them. This is the main problem we may find. We can resolve it by trying to reduce the download time of these pages.

How can we accelerate the indexing of https?

There are two techniques:

  1. Google Sitemap: include our https pages in our sitemap (we are referring to the Google sitemap, not the sitemap for humans) and register it in Google Sitemaps (see the sketch after this list).
  2. Guerrilla: spread links all over the Internet pointing to our https pages, so that the spiders indexing the pages where those links appear also come into the https part of our site.
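
As a sketch of technique 1, here is a minimal Google sitemap listing one https page (the domain is the same placeholder used below, and the file name is invented):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal sitemap listing an https URL so the spiders find it directly -->
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.nombredelapagina.com/area-clientes.html</loc>
      </url>
    </urlset>

Once uploaded, this file is registered in Google Sitemaps so the spiders request the https pages directly instead of having to discover them through links.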

How can we prevent our https pages from being indexed?

It is not as easy as it looks. It is no use simply listing the https pages in our regular robots.txt. Each port requires its own robots.txt, so we must create one robots.txt for our http pages and another for our https pages. In other words, we must also have a page called

https://www.nombredelapagina.com/robots.txt
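
A sketch of what that second file could contain if we want to keep the whole https part of the site out of the search engines (it must be served from the https port, at the address above):

    # robots.txt served at https://www.nombredelapagina.com/robots.txt
    # Keeps all robots out of everything served over https
    User-agent: *
    Disallow: /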

If you need help to index or de-index your https pages, please contact us. We will be delighted to assist you.

Additional information:

MSN blog about indexing - article where they explain that MSN is starting to index https
http://blogs.msdn.com/livesearch/archive/2006/06/28/649980.aspx

Information about preventing Google from indexing https:
http://www.google.es/support/webmasters/bin/answer.py?answer=35302

More information about Google Sitemaps:
Google SiteMaps
http://www.geamarketing.com/articulos/Descubre_indexacion_futuro_Google_SiteMap.php

Free online search engine optimization course: Search Engine Positioning Course
http://www.geamarketing.com/posicionamiento_buscadores.php

BMW's website expelled from Google... could it happen to you?

It is the story of the week: BMW's German website has been expelled from Google.de for practicing search engine spam.

It was Matt Cutts's blog that revealed this expulsion. Matt is a Google employee who writes one of the SEO blogs with the best content on the net. Obviously, Matt does not reveal anything Google does not want him to, but at least the information he provides is always first-hand and comes directly from the source.

Let's see what happened ...

A few weeks ago Matt commented that Google would be much tougher on search engine spam and that between February and March it would change its way of indexing sites in order to combat it. It will not change the algorithm, but its spiders will look for spam and report it so it can be removed.

The problem of spam is becoming a nightmare for the major search engines, and the BMW case is not an isolated one. Many webmasters think they can fool Google and other search engines by using hidden keywords or camouflaging text in their code.

Often, while browsing Google, you find results pages that are not positioned correctly... not because they are good... rather the opposite. You might wonder how such a "seedy" page with such poor content can be in first position for a search with more than five hundred thousand results. If you take a good look at the code, you will find the reason. The BMW case is also one of hidden code; we can no longer see it except in the image Cutts shows us, but there are still many pages that practice spam and that Google has not detected and expelled.

Consider an example where you can still see the hidden code:

www.todoalarmas.com

If we Google "home alarm" we will find 996,000 results. This page comes first. If you visit it, you will see that there is no apparent reason for it to occupy that position. But if you look at its source code, you will discover why it is in first position: a text with more than 3,000 words hidden in a "noscript" tag.

Note: you will not see its code if you click the right mouse button and choose "view source"... (they have already made sure you cannot do that), but you will see its code if you go to the top menu bar and click on: View >> Source Code.
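
To make this kind of trick easier to recognize, this is roughly what such hidden text looks like in the source (an invented fragment for illustration, not the actual code of that site):

    <!-- Keyword stuffing hidden from visitors but readable by spiders -->
    <noscript>
      alarmas alarma hogar alarmas para casa alarma barata seguridad hogar
      alarmas alarma hogar alarmas para casa ... (and so on, for thousands of words)
    </noscript>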

We'll see how long they last ...

By watching whether or not they disappear from Google, we will also be able to tell when Google has activated its antispam indexing system.

...And BMW: BMW has already apologized to Google, and Google has put it back on the list of sites to be indexed, so in the next update its pages will be indexed again. But it takes time (months, that is) to index an entire website again, with all its pages. (Unless they use Google's "site map" to do so, which I do not know whether BMW will... we'll see.)

The moral of all this is: do not try to fool Google; focus on building good pages with interesting content that gets other websites to recommend you (this is what builds up PageRank). Make a Digital Marketing plan and stick to it.

Moral number 2 would be: search engines really do have a permanent influence on the success or failure of web pages... otherwise, BMW, and many other websites, would not risk being expelled over such an issue.

Additional information:

Article where we explain what search engine spam is and Google's possible response to it, adding the Trust Rank algorithm to refine PageRank:
Discover what may become the substitute for Google's PageRank: the TRUST RANK

Article where we explain what Google's "site map" service is and how it works: Discover the indexing of the future: Google SiteMap

Text camouflaged by BMW:
http://www.mattcutts.com/blog/ramping-up-on-international-webspam/

Free search engine optimization course that will not get you expelled: Online Search Engine Optimization Course