Posts

Discover the 3 reasons why Microsoft is entering the search-ads market

Microsoft announced last week that it has developed its own system of text ads for MSN Search results. According to Microsoft, this system is much better than those of its rivals Google and Yahoo, as it will serve ads based on the gender, age and location of the user.

There are three clear reasons why Microsoft is making this move:

  1. For Microsoft, Google is becoming a real threat. Thanks to the revenue Google earns from its AdWords, it is funding many initiatives in the field of free software and applications that have little to do with Windows and that compete directly with Microsoft products, such as Gmail vs. Hotmail.
  2. It is obvious that the market for text ads included in search results is a great source of income. Google is living proof: it has announced a net profit of 342.8 million dollars for the second quarter of 2005.
  3. In a way, for some years MSN has been forced to share revenue with Yahoo, one of its main competitors. Until now, the ads MSN offers on its website have been served by Overture, a company that has belonged to Yahoo since March 2003, so Yahoo earns a commission on every MSN sale.

According to Microsoft, the service it offers will be much more attractive to advertisers than Google's and Yahoo's, because it will provide segmentation by gender, age, location, the time the ads are displayed, and other parameters Microsoft knows about its users.

Microsoft was a latecomer to the world of search engines, but it looks like it is now committing fully, step by step and without taking unnecessary risks.

MSN Bot

Its first move came in the summer of 2003, when it launched its newly programmed spider, MSN Bot, to scan the entire web, while its portals were still using the Inktomi search engine (owned by Yahoo since that same year, after Yahoo paid 235 million dollars for it). During 2003, Yahoo earned 5.3 million dollars from MSN for the use of its search engine.

In mid-2004 Microsoft launched the beta version of its own search engine, and at the end of 2004 it stopped using Inktomi for good and began serving its own search results. Since then it has been fighting to position itself among the best search portals. That said, MSN Search's best asset is not its result-ranking algorithm (as it is in Google's case) but the fact that many Windows users do not know how to change their browser's home page, nor how to change the search engine that MS Explorer ships with by default. So it is no surprise that MSN is the number two website in the world by traffic (Yahoo is number one, MSN number two and Google number three).

Having built its own search engine, the next logical step is for Microsoft itself to exploit the economic potential of search engines, a potential Microsoft failed to see until Google and Yahoo began presenting positive financial results year after year.

It seems that the first Microsoft sites to test this new ad system will be MSN Singapore and MSN France. Then it will spread to other countries.

We will be watching when this happens, to analyze the segmentation, its acceptance, and the subsequent expansion of MSN's network of advertisers.

Links:

Alexa Ranks (http://www.alexa.com/site/ds/top_sites?ts_mode=global&lang=none )
World ranking of websites by number of visits and page views.

Discover the indexing of the future: Google Sitemaps

Google proposes what may become the new way to index web pages.
Search engines like Google and Yahoo use spiders to gather information from the web pages published on the Internet. Once they have that information, they process it so that, when a user comes to their site and searches for a term or a phrase, the results can be sorted quickly according to a specific algorithm.

Search engine spiders regularly visit the websites published on the Internet and automatically update the information about their content.

Until now, spiders entered the root directory of a domain, looked for the robots.txt file to make sure the site wanted to be indexed, and then proceeded to visit all the links they found on the website, recording the content of each page.
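For reference, a minimal robots.txt that tells every spider it may index the whole site looks like this (an illustrative sketch, not taken from the original article):

```
# Applies to every spider
User-agent: *
# An empty Disallow means: nothing is off limits
Disallow:
```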

Google Sitemaps will revolutionize this way of indexing web pages.

It is not just that Google will now read more carefully the site maps people include in their web pages... it is nothing like that... it is a radically new way of indexing the content of pages. Google proposes that we create a sitemap in XML, following certain specifications, which will give its spiders all the information they need and grant them access to URLs that until now may have remained hidden for various reasons beyond the webmasters' control.

Google wants to access the content of web pages in the easiest and most efficient way possible. Indexing as it works today is already far more efficient than the manual submissions of the old days (who doesn't remember going to a search engine and entering, by hand, the description of our site, the keywords we wanted to be found by, and the site URL... but that is internet prehistory), and what Google now proposes is much better still.

It all comes down to making a special sitemap available to the spiders.
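As a rough sketch of what that special sitemap looks like, here is a minimal example following the specification linked at the end of this article (the 0.84 schema in use when the service launched; www.example.com is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-07-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
  <!-- ...one <url> entry per page of the site... -->
</urlset>
```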

To create this sitemap, it is enough to install an application on our server (there are versions for all operating systems) that creates a site map in a certain format. The application Google proposes can generate the map from the URLs of the website, from the directories of the website, or from the server logs (ideal for dynamic pages).
Once we have the sitemap built according to Google's specifications, we can register it in Google Sitemaps. Automatically, and in less than four hours, Google will have indexed it.
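Google's own generator was, in fact, a Python script. The sketch below is not that tool, just a simplified illustration of the directory-based approach described above; the web root, base URL and file extensions are illustrative assumptions:

```python
import os
from datetime import datetime

SITE_ROOT = "/var/www/html"          # illustrative path to the web root
BASE_URL = "http://www.example.com"  # illustrative site URL

def generate_sitemap(root, base_url):
    """Walk the web root and emit one <url> entry per HTML file."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".html", ".htm")):
                continue
            path = os.path.join(dirpath, name)
            # Last modification time of the file, in W3C date format
            lastmod = datetime.fromtimestamp(os.path.getmtime(path)).strftime("%Y-%m-%d")
            url = base_url + "/" + os.path.relpath(path, root).replace(os.sep, "/")
            entries.append(
                "  <url>\n"
                f"    <loc>{url}</loc>\n"
                f"    <lastmod>{lastmod}</lastmod>\n"
                "  </url>"
            )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">\n'
        + "\n".join(entries)
        + "\n</urlset>\n"
    )

if __name__ == "__main__":
    with open(os.path.join(SITE_ROOT, "sitemap.xml"), "w") as f:
        f.write(generate_sitemap(SITE_ROOT, BASE_URL))
```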

Google allows webmasters to set up a cron job that generates a new map every hour (for sites whose content is renewed frequently) and automatically resubmits the map to Google Sitemaps. This way, the spiders will know immediately about newly created pages, and those pages can be incorporated into the index.
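For illustration, such a cron job might look like the line below; the script path is hypothetical, and the ping address is the resubmission URL as we understand Google's documentation of the time:

```
# Every hour: rebuild sitemap.xml, then tell Google Sitemaps to re-read it
0 * * * * python /var/www/generate_sitemap.py && wget -q -O /dev/null "http://www.google.com/webmasters/sitemaps/ping?sitemap=http://www.example.com/sitemap.xml"
```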

Advantages of this application:

No matter how poorly your website's internal paths are laid out for spiders... with a site map created by the Sitemap Generator, Google's spiders will always find the URLs of all your pages.

Another great advantage is how quickly the content of the entire site is indexed. In less than 4 hours, the spiders will have visited up to 50,000 links on our website. For websites with more URLs, Google recommends creating several sitemaps and an index of sitemaps.
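A sitemap index is itself a small XML file listing the individual sitemaps; a minimal sketch, again with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.google.com/schemas/sitemap/0.84">
  <sitemap>
    <loc>http://www.example.com/sitemap1.xml</loc>
    <lastmod>2005-07-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap2.xml</loc>
  </sitemap>
</sitemapindex>
```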

Disadvantages of this application:

It requires some programming knowledge, so either ISPs will offer this service as added value for their customers, or many websites will go without it and will continue to be indexed by the ordinary spiders.

The sitemaps already available on most websites are not compatible with Google's format: Google wants an XML document that follows certain specifications.

With this project, Google is undoubtedly seeking to improve the indexing of web pages and to include in its indexes pages that until now were lost in a sea of links within our sites.

Google has created the Sitemap Generator and this express indexing service and offers them completely free... it will be interesting to see Yahoo's reaction to this, because Yahoo charges $49, $20 or $10 for its fast-indexing service, depending on the number of URLs we want indexed on an accelerated basis.

We do not yet have firsthand results regarding the effectiveness of indexing through Google Sitemaps. Once we have installed the new sitemap on various websites and are ready to compare the increase in the number of indexed pages and the frequency of spider visits, we will write a new article reporting the results. See you then.

Later note: A few months have passed since we wrote this article. The results have been very good. A whole new website is indexed in less than 24 hours. It is ideal for when a new site goes online: it can be indexed all at once, without having to wait months and months for Google's spiders to read its entire contents.

Additional information:

URL with information about Google sitemap:
https://www.google.com/webmasters/sitemaps/docs/en/about.html

URL with specifications about Google sitemap:
https://www.google.com/webmasters/sitemaps/docs/en/protocol.html

8 key factors to overtake your competitors on Google

No one can claim to know the algorithm Google uses to sort search results, but it is relatively easy to investigate which factors are involved in it and to what extent they affect it. In addition, you will find plenty of literature on the subject online, with which to extend your knowledge if the topic fascinates you.

This article presents 8 key factors that will help you understand why other sites rank above yours, and how you can overtake them.

1. Decide which words you are going to concentrate your efforts on

You cannot fight for many words at once, so concentrate your efforts on about 10 words or phrases that you think your target audience might search for on Google.

Begin the analysis that will lead you to success by making a list of the top 5 websites that appear in the top results when searching for those 10 words.

Browse through the 5 pages that appear. Pay special attention to discovering which words they are targeting.

2. Find out where the words you want to fight for are located

Look carefully at where they place their keywords.

Google gives more importance to words located in certain parts of a web page. The most important part is the URL (the address of your website); next comes the <title> tag; then the headers <h1>, <h2> and <h3>; then the words that form links to other pages; and from there the importance keeps decreasing, although it is always higher than plain text if the words are in bold, in italics, part of an alt attribute (the alternative text on images), etc...
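To make that list concrete, here is a skeleton page targeting the phrase "wooden tables" (the example phrase used later in this article); it simply places the phrase in each of the spots just described. The URL itself, the most important spot of all, would carry the phrase too, e.g. a page served at /wooden-tables.html (all names here are illustrative):

```html
<html>
<head>
  <!-- Second most important spot: the title tag -->
  <title>Wooden tables for your office</title>
</head>
<body>
  <!-- Headers come next in importance -->
  <h1>Wooden tables</h1>
  <!-- Link text weighs more than plain text -->
  <a href="/catalog.html">Our wooden tables catalog</a>
  <!-- So do bold text and image alt text -->
  <p>We build <b>wooden tables</b> to order.</p>
  <img src="table.jpg" alt="wooden tables for the office">
</body>
</html>
```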

3. Find out what keyword density they have

Keep in mind a few things:

Google (and the other search engines) work by density, not by absolute values. So if your URL or your title has 50 characters and 9 of them match what the user searches for, the value of your URL or title is 9/50. So try not to include superfluous text, or URLs with umpteen digits that correspond to a user session or something similar.

Also bear in mind that since March 2004 Google works by characters, not by words. That is why the preceding paragraph says "characters" rather than words. Until March 2004, if your title had five words and three of them matched the user's search, the value of your title was 3/5 (in Spanish, prepositions were filtered out and not counted as words). That is no longer the case: now it goes letter by letter. Thus, if someone searches for a derivative of a word, a plural, or a conjugated verb, a page containing something similar is also included in the search results.
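As a toy illustration of this character-density arithmetic (nobody outside Google knows the real formula; the title and phrase below are made up):

```python
def char_density(text, query):
    """Fraction of characters in `text` covered by matches of `query`."""
    text_l, query_l = text.lower(), query.lower()
    if not text_l:
        return 0.0
    matched_chars = text_l.count(query_l) * len(query_l)
    return matched_chars / len(text_l)

# A 36-character title where the 13-character phrase matches once: 13/36
print(char_density("Office wooden tables and chairs shop", "wooden tables"))  # ~0.36
```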

Once you discover where they place the words, look at the density with which those words appear. On your own page, make them denser than on theirs. You can do this by including the word more times, or by including fewer words unrelated to that search. The point is to raise the density and beat theirs in each of the parts where the word appears.

Careful, do not go overboard... Google penalizes pages with suspiciously high densities. You can reach 100% density in the title and the URL without anything happening. But a page where a word is repeated 100 times, everywhere, in bold and in links, with no other text, will assuredly be expelled from Google. So: moderation.

Also, remember that your website has to be read by your users / customers... it is essential that the text is aimed at them, not at search-engine effectiveness.

4. Find out how many pages their websites have

The more pages you have indexed in Google, the more likely they are to take part in the fight for certain words. There are also indications that Google gives a better position to websites containing a large number of pages that include the search term.

So, on the one hand, include the words you want to rank for in as many pages as possible. On the other hand, try to make your website have about 200 pages or more.

But once again, find out what your competitors do and include it in the table you started at the beginning of this study.

To find out how many pages of a website are indexed in Google, simply type into the search box:

site:www.nombredelaweb.com

(Careful: do not include a space between site: and the URL)

To find out how many indexed pages contain a particular word or string of words, simply type into the search box:

site:www.nombredelaweb.com "word or phrase"

This will give you the number of pages containing the phrase "word or phrase" on the website www.nombredelaweb.com

5. Check the number of links pointing to your pages

The algorithm that makes up PageRank (cultural note: PageRank means "Larry Page's rank", not "page ranking") is formed by many other algorithms and is quite complicated to understand. But there are some basic features that you can easily apply to your website.

PageRank is influenced by, among other things, the number of links pointing to a website, the density of those links on the source page, and the PageRank of the source page.
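To see how these factors interact, here is a sketch of the simplified PageRank formula Brin and Page published, PR(A) = (1-d) + d * sum(PR(Ti)/C(Ti)). It is only the textbook version, not Google's production algorithm, which, as this article notes, carries extra filters:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified published PageRank: PR(p) = (1-d) + d * sum(PR(src)/outlinks(src)).

    `links` maps each page to the list of pages it links to. A source page's
    score is diluted by its number of outgoing links -- the "density" effect
    described above.
    """
    pages = set(links) | {p for targets in links.values() for p in targets}
    pr = dict.fromkeys(pages, 1.0)
    for _ in range(iterations):
        pr = {
            p: (1 - damping) + damping * sum(
                pr[src] / len(targets)
                for src, targets in links.items() if p in targets
            )
            for p in pages
        }
    return pr

# Toy web: two pages link to 'c', which links back to 'a'
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))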

So this point number 5 will focus on the first of the factors affecting PageRank: the number of links.

Again, note down the number of pages that link to each of the 5 competing websites you are analyzing in your list.

To find the number of links to a page, simply type into the search box:

link:www.nombredelaweb.com

Since March 2004, Google gives less value to links that come from pages with an IP similar to yours, so there is no point in cheating: Google knows.

A few months ago we wrote an article about Hilltop, the algorithm Google uses to calculate and filter the PageRank of sites: HillTop

6. Analyze what kind of websites link to your competitors.

In all likelihood you will not be able to include in your list the PageRank of every page that links to your competitors' pages, but it is important to see what kind of websites they are, what PageRank they have, how many other websites they link to, and what words they use to link to your competitors.

The higher the PageRank of a page that links to you, the more points that link earns you. So look for pages with a high PR and try to get them to link to you.

To conclude this point, do not forget that in Google and the other search engines everything works by density, so if a page sends out 100 links to other websites, the value of the one link that comes to you is 1/100. So forget about link farms. Get links to your site from pages with few links and a high PageRank.

7. Find out which words the links pointing to your competitors' websites use

If the searched word is part of a third-party link to your website, you get a bonus in points (so to speak). So if you are in the business of making wooden office tables, make sure the pages that link to yours use the phrase "wooden tables" as the link text, rather than www.minombredeempresa.com
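In HTML terms, the difference is simply the anchor text of the link (using the article's own example URL):

```html
<!-- A link that passes the target phrase: earns the bonus -->
<a href="http://www.minombredeempresa.com">wooden tables</a>

<!-- A link that only passes the company name: bonus wasted -->
<a href="http://www.minombredeempresa.com">www.minombredeempresa.com</a>
```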

Obviously, you cannot always control which words third-party websites use to link to yours... but whenever you can, remember this point 7: remember the bonus you take home if you get it!!

8. Write down the PageRank of your competitors' pages

Do not forget to include a column in your study indicating the PageRank of your competitors' websites. This will help you understand why they are in the top positions.

Remember that to increase your PageRank you must, above all, increase the number of pages that link to yours. So if you have a PageRank below 4, get to work on obtaining links. If you have more than 4, raising it is quite difficult unless you run a specific campaign for this purpose, well designed and with a good strategy.


So far, we have described the 8 key factors that will help you gain positions in Google. But when I carry out this kind of benchmark, I usually include three more columns in the list: our competitors' standing in the Alexa ranking. It is not that Alexa influences Google, but it is good to know where they stand in terms of unique visits, page views per user, and overall ranking. You will find these three figures by looking your competitors up on Alexa.com.

I hope these 8 factors have been helpful. This article is meant to provide guidance to people who wish to know the exact position of their web pages compared to those of their competitors. It is not intended to be an in-depth manual on how Google works.

To view the slides we use when we give lectures about how search engines work, you can download them here: Slides

For more information about search engines: Free Search Engine Optimization Course

By the way, if you have questions or want to go deeper into any specific point, we will be happy to assist you.

See how Google Scholar, Google's new search engine, works

At the end of last week Google put online the beta version of Google Scholar, its new search engine for locating technical information among all the published articles, studies, theses, white papers, case studies, technical reports, research, documentation from research centers and universities, books, etc.

Not even a month has passed since Google launched its Google Desktop Search tool, and it has pleasantly surprised us again by releasing Google Scholar.

The initial screen is very similar to Google's web search engine; however, once we enter a search topic and press the "search" button, we get a results window with no commercial information and no ads. The sorting criteria take into account the content of the documents, the author, the publication in which the document appears and, in a way similar to the web version's inbound links, the number of citations to that article in other documents. It is also striking that the links shown are not unique, as the same article may be published in different media. It even has links to documents referenced by the studies (even if they do not exist on the Web), similar to the web version's concept of outbound links.

To limit searches by author, the search box allows us to include the "author:" filter, which we can use alone or together with the topic or concept we are looking for, to limit the number of results obtained.
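For example, a query of the following form restricts results to one author while still filtering by topic (the names here are illustrative):

```
author:knuth "sorting algorithms"
```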

Moving on to practice, I have done a few searches in both versions of Google to see the real differences. The first search was "eye tracking technology". The web version gave me 1,040,000 references and four ads. Of the first 10 results, 4 are companies selling solutions related to this technology; the remaining 6 are studies or related technical information. In contrast, of the 13,600 results shown by Google Scholar, 100% are technical; no commercial references or advertisements appear... simply perfect!

I tried a second example, "web metrics". The results were curiously similar. Of the 2,050,000 results and seven ads in the web version, 40% were technical results and 60% commercial. In the Google Scholar version there were 28,000 results, all of them technical once again.

Finally, a mention for authors whose technical documents have not yet been indexed: they should ask their school, university or publisher to contact Google Scholar to have such content included. For the moment, Google Scholar does not allow authors to publish reports and documents directly. More information is available in the FAQ.

It is definitely a tool the research community will be talking about from now on.

Discover how A9 works: the final version of the search engine created by Amazon

How A9, Amazon's search engine, works; what kind of algorithms it is based on; why it is called A9; who devised it; and everything else we have learned about this new search engine that will have to measure itself against Yahoo, Google and the new MSN that Microsoft has in beta. Let's go see it!

A9's entry into the search engine market opens an interesting period in which the war to monopolize user searches takes on a new dimension.

With the new version Amazon put online yesterday, we will review the topics covered in our April functional analysis to find the differences, see whether they have corrected the weaknesses shown back then, and discover the innovations this search engine presents.

Personalized search-history service:

When you enter A9 and you are an Amazon member, it recognizes you by your cookie and greets you with a "Hello Montserrat" that leaves me flabbergasted and, because of the impact, I cannot help thinking of "Welcome, Professor Falken. Would you like a game of chess?" :-)

Apart from the name, it shows me the search box and the history of all my searches on A9, in case I want to consult some previous results (and I do not know whether they have programmed it, but it could also serve to learn which of the offered results you chose, so they can be offered better the next time you search for something... we will find out with time).

About the database used by A9:

A9 definitely uses Google's database instead of Alexa's (remember that Alexa was bought by Amazon in 2000 and that Alexa has scanned more websites than Google... but Google keeps its database clean and deletes, every 6 months, the web pages its spiders have been unable to reach during that period... Alexa does not).

In the April analysis, we commented that it used the Google database but not the whole of it, only a portion. Now we can confirm that it uses all of it: searches of the type site:www.solocine.com get (approximately) the same number of results in both search engines.

About A9's sorting algorithm

It is Google's, without a doubt.

It shows some variation in the order of the results, but I think this is because both Google and A9 apply filters to the results without your knowing it, rather than because the algorithm differs. For example, depending on the language you have Google configured in, it offers different results when you search in Spanish... even if you insist that you want no filters applied...

It is a shame they have not chosen to use their own algorithm and compete with Google to see who offers the highest-quality searches. Even using the same database... A9 could very easily have used the Alexa ranking instead of PageRank to determine the relevance of a page and thus influence the result-sorting algorithm. But it seems clear that it has chosen to ally itself with Google rather than compete against it.

About advertising on A9

The system uses Google AdWords, Google's sponsored links. The ads are served directly from Google's own machines (you can see it in the redirection URLs of the ads).

What is A9's value proposition? How does it differ from the rest?

Since, from what we are seeing, at the moment A9 is basically a Google with another look & feel, let's see how it differs:

  • A9 offers image search results alongside web search results, and even alongside searches of the texts of the books Amazon sells. It is a convenient feature that makes it easier to find out whether a page interests you or not.
  • Most of the site's functions work with "drag & drop", the new trend in usability for end-user applications: everything is dragged and dropped where you want it to act or be saved.
  • Favorites (bookmarks) tracking service: if you drag the URL of a website appearing in a result to the bookmarks, it is automatically saved there so you can consult it any other day.
  • It offers 4 skins and 3 different font sizes: if you want to see A9 in purple, with letters a myopic person can read without glasses, A9 allows it.
  • It offers Alexa's "Site Info" in its results: the results offered after a search are accompanied by a small "site info" icon which, as on Alexa, activates a layer with information about the page (position in the Alexa ranking, links to the page, download speed, etc.)

I do not think Udi Manber is very satisfied with the new A9. Manber is a specialist in algorithms, former "chief algorithms officer" at Amazon, former "chief scientist" at Yahoo and a former computer science professor at the University of Arizona... I do not see him as someone content to put on the market a Google with a few things touched up on the surface... I do not know for how long the agreement with Google has been signed, nor whether money is involved (apart from AdWords, which benefits both: Google's AdWords are also on Amazon).

Time will tell... but I hope A9 ends up being the chrysalis of something better that awaits us in the near future... or maybe it will die trying... we will see.

As a curiosity: Udi Manber is the man behind the name A9, which refers to the 9 letters of the English word "algorithm".

By the way... the A9 URL is www.a9.com, in case you want to play around and spot the differences with Google :-)

What is the Hilltop algorithm?

Since March 2004, Google gives less value to links coming from pages with an IP similar to yours, so there is no point in cheating to change your PageRank and thereby improve your SEO: Google knows.

This filter applied to PageRank is called the Hilltop algorithm.

Google has implemented this change in its algorithm to neutralize a trick that some SEO-expert webmasters have been using since PageRank became operational: creating endless small websites, hosted on their own ISP, that link to their main website.

Large corporations have also abused the fact that a large number of inbound links improves your positioning... without going any further: SoloStocks has links in the footer of all the Intercom Group websites... and therefore on every page of Softonic (to cite one of our companies) there is also a link to SoloStocks. Since over 500,000 of those pages are indexed in Google, my site receives 500,000 external links. That seems great to me... but it is not 100% fair from the point of view of an independent webmaster who runs a website with great content yet can never rank above mine... so Google has implemented Hilltop, which has neutralized the effect of these links.

So, as I said... to improve your website's SEO there is no need to waste time obtaining links from websites hosted on the same IP as yours... because Google now looks at the IPs that inbound links come from, and has greatly lowered the weight given to those with an IP similar to yours.

Effects of the Google Dance of September / October 2004

Unlike the Google Dance of March 2004, this September we all expected the new PageRank and a cleaning of the database, with the consequent de-indexing of all the pages its spiders had been unable to find since the last great cleansing, carried out in March... but it has not happened.

Ok, ok ... let's start at the beginning ...

What is a Google Dance?

These are the changes that occur in the Google algorithm from time to time, causing the results in the top positions to change places and "dance".

September's dance fell short

In September 2004, Google limited itself to publishing results as in any other month, without modifying the PR (at least outwardly, since we cannot be sure that the PageRank displayed in the Google toolbar is really the one Google uses in its result-sorting algorithm), and it showed only some variation in the results... But October came, and with it, the new PageRank.

Since when had Google not recalculated PageRank?

Since mid-June, the PageRank of web pages had not been massively recalculated.

Specifically, according to rumors, it had not been recalculated since the Check Sum algorithm (http://google.dirson.com/noticias.new/0569/) began running online.

Changes in the calculation of PageRank

We commented in an article in March that after the Florida Update, Google had included in the PageRank algorithm a filter to discriminate against the websites of large corporations, or of a single owner, that traded links with the sole purpose of raising their PageRank. This filter appears to remain active.

This filter is a complex algorithm in itself, and we explained it in the article:

What is the Hilltop algorithm?

But let's see what Google has been doing these last 3 months:

August 25: big moves in the order of results

These moves were first attributed to a Google Dance, but after a few weeks the affected websites were seen recovering their previous positions, so everything points to them having been tests of the algorithm.

September 23: new moves

New results start running, based on all the material Google's spiders collected up to August 30 (except the home pages of websites, which Google updates every two or three days). Serious doubts begin about whether the PageRank bar shows the PageRank Google uses for its calculations... it is believed that the data shown in the bars has not been updated, but the data Google uses for its calculations has.

October 7: Google begins assigning the new PageRank

From October 7, some pages with zero PageRank have begun to show PageRank in the Google bar. We have been able to confirm this through the appearance of PageRank in the Google bar on pages created during July, August and September, which until now showed zero.

Also, on the site PageRank Watch, we can see some websites whose PageRank has been assigned or modified since that day.

Some new features in Google searches

Searches in the pages of scanned books

We knew that after working with Amazon (on A9, Amazon's search engine, which runs on Google), Google was able to look inside the books Amazon sells. Now, from Google itself, if you want to find results that appear within a book, you can make the following query:

book + a term (for example: book shakespeare)

This will show a first result with an icon, indicating that it is a result where the words you searched for appear in a book. In fact, the search is performed on http://print.google.com, Google's database of scanned books.

The books belong to several online bookstores, not just Amazon.

In all likelihood, during this year we will discover more things about Google Dance September / October 2004 ...

Why eBay (and possibly Google) open their source code to developers

Quick answer to the "why": because it is the logical evolution from a Metcalfe Network to a Group Network. If these topics interest you... read on.

Let's start at the beginning:

What is a Metcalfe Network?

A Metcalfe Network is a business with a particular structure that generates value according to Metcalfe's Law.

(... yeah okay, thank you)

What is Metcalfe's Law?

Metcalfe's Law postulates that the value of a network increases as the square of the number of users of the system. This is not 100% true (I will explain why later), but the truth is that this law is very useful for estimating the expected growth of a business and the value creation linked to that growth.

Consider this ...

The structure of a traditional business

(whether it operates online or offline)

In a traditional business, a provider offers a service or sells a product to a number "n" of customers. And the number of potential transactions, at a given moment and without repeat customers, exactly equals the number of customers it has. (See Image 1)

Traditional business

Image 1: Business with traditional structure

The value of this business depends on the number of customers it has (of course, more factors influence business value, but in a hypothetical formula for calculating that value, our "n" is one of the variables... and it is the one we will analyze in this article).

In a traditional business, the number of possible transactions grows linearly. That is: +1 customer equals +1 possible transaction.

The structure of networked businesses

There are certain businesses, such as a telephone network, that do not work like a traditional business. All users are supply and demand at the same time (they make calls and receive calls), so the number of possible transactions is practically n^2 ("n" squared)... and I say "practically" because in reality it is n*(n-1), since the supply side is all users, but the demand side is everyone except yourself... one does not place calls to oneself. (In the rest of the article we will treat it as n^2 so as not to complicate the formulas, but please keep this in mind.)

The first person to apply a law to this kind of business behavior was Robert Metcalfe (or so the legend goes), inventor of Ethernet back when Xerox invented the first laser printer in 1974 and did not know how to connect more than one computer at a time to it. Metcalfe's Law postulates that the value of a networked system grows as approximately the square of the number of users of the network. (See Image 2)

Metcalfe Network

Image 2: Business with Metcalfe Network structure

Hence, businesses with circular structures, where all users can be supply and demand at the same time, are called Metcalfe Networks.

The most beautiful example of a Metcalfe Network is eBay. It tends toward perfection, specifically in its C2C (consumer-to-consumer) side. Any user is a potential seller and any user is a potential buyer. Its growth is the closest to n^2 we can find on the Internet.

The dating websites that have proliferated so much in recent years are also clear examples of beautiful, perfectly round Metcalfe networks. And I say round, because there are networks whose structure is not round like the one in Image 2: networks where there is clearly a supply side and a demand side. In that case, the number of potential transactions is still supply * demand, but it does not tend to the square, since the two numbers are different. Of course, they are raised to a coefficient greater than 1, which would be the traditional business, but less than 2, which would be a perfect Metcalfe Network.

An imperfect example of this type of network would be SoloStocks.com, where the companies selling their stocks are not all potential buyers of stocks (except if they are all brokers... then yes; do you see the difference?). (See Image 3)

Variant of a Metcalfe network

Image 3: Business with a variant Metcalfe network structure

If the number of potential transactions in a perfect Metcalfe network of 8 users would be 8*(8-1) = 56, the number of possible transactions in a non-circular network of 8 users (suppose 4 offering and 4 demanding) would be 4*4 = 16; that is, the network externality coefficient of that market would be 1.33 instead of 2. Value generation therefore grows more slowly than in a perfect Metcalfe Network, since the latter offers more value to its users, but faster than in a traditional business, which offers much less.

In a marketplace like SoloStocks, it is interesting to note that the greater the imbalance between the number of supplying users and demanding users, the lower the growth of the business; therefore one must always strive to balance the number of suppliers and demanders.

Let's look at an example of this: suppose a network of 8 users where 5 are supply and 3 are demand; the number of potential transactions would be 3*5 = 15 and, therefore, the network externality coefficient 1.29. If we unbalance it further, with 6 suppliers and 2 demanders, the maximum number of transactions is 12. And so on... until reaching the maximum possible imbalance, which would be the traditional business (or any e-commerce), where there is 1 seller and 7 buyers (...to continue with a network of 8, as before); the number of potential transactions would be 1*7 = 7 and therefore the network externality coefficient would be 1, which is exactly what we said at the beginning when explaining the structure of a traditional business: the number of potential transactions equals the number of customers.
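This arithmetic is easy to reproduce. The sketch below computes the network externality coefficient as the exponent c such that transactions = n^c, taking n as the total number of users; the small deviations from the article's rounded figures (1.30 vs. 1.29, and 0.94 vs. the idealized 1 at the traditional extreme) are rounding effects:

```python
import math

def externality_coefficient(suppliers, demanders):
    """Exponent c such that suppliers * demanders = (suppliers + demanders) ** c."""
    n = suppliers + demanders
    return math.log(suppliers * demanders) / math.log(n)

# Perfect Metcalfe network of 8 users: 8*7 = 56 transactions, c ~ 1.94 (tends to 2)
print(round(math.log(8 * 7) / math.log(8), 2))

# The article's split networks of 8 users: (suppliers, demanders)
for s, d in [(4, 4), (5, 3), (6, 2), (7, 1)]:
    print(s, d, s * d, round(externality_coefficient(s, d), 2))
```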

Well... if all these numbers and letters have not made you dizzy and you are still reading, let's move on to the next level of network evolution and finally discover :-) why eBay, and maybe Google (judging by the rumors around the network), open their source code:

Group Networks, or Group-Forming Networks

When a Metcalfe Network's technology allows users to organize around common interests or goals, small Metcalfe networks begin to appear, gravitating around the large network that feeds them. (See Image 4)

Group Network

Image 4: Business with Group Network (Group-Forming Network) structure

For some time, eBay has been creating, or participating in the creation of, small sites specializing in certain types of auctions, and has therefore evolved toward this model of network structure reminiscent of a daisy. Opening its code means an infinite number of programmers can begin to develop applications that revolve around the great network, eventually forming the daisy's petals.

With closed source, only they can create the petals. With open source, the potential number of petals that can be created is staggering.

The growth of Group Networks tends to this formula: a^n, where "a" depends on the number of possible channels that can be opened, and "n" is the number of users.

The theoretically perfect business would be one that managed, for each user of the main network, to create a subnet. Its structure would be something like n^n (I know of none, and I am sure it is just a theoretical model... but at the very least, it is interesting to keep in mind).

Summarizing:

Traditional businesses: grow linearly with the number of customers (service demanders) they have: 1 supplier * n demanders or buyers.

Businesses with a Metcalfe Network structure: grow almost as the square of the number of users they have: n*(n-1), or simplifying: n^2.

Businesses with a Group Network structure: grow exponentially: a^n.
Opening their applications' code accelerates this growth, by making "a" grow.

If you are not bored yet and you are still reading, here is a gift:

What happens when you merge two Metcalfe networks?

Business 1: m users

Business 2: n users

Value of the sum of 2 traditional businesses: m + n
Value of the sum of 2 Metcalfe networks:
(m+n)^2 = m^2 + n^2 + 2mn -> the synergy appears!!! 2mn

(... and yes... this is Newton's binomial theorem we studied at school! ;-)

The growth of two Metcalfe networks together is greater than the sum of both growths separately.

Isn't it beautiful to see the word "synergy" represented mathematically?