Application-Layer Protocols

              We have just learned that network processes communicate with each other by sending messages into sockets. But how are these messages structured? What are the meanings of the various fields in the messages? When do the processes send the messages? These questions bring us into the realm of application-layer protocols. An application-layer protocol defines how an application's processes, running on different end systems, pass messages to each other. In particular, an application-layer protocol defines:

  • The types of messages exchanged, for example, request messages and response messages
  • The syntax of the various message types, such as the fields in the message and how the fields are delineated
  • The semantics of the fields, that is, the meaning of the information in the fields
  • Rules for determining when and how a process sends messages and responds to messages
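As a toy illustration of these four elements, here is a tiny invented request/response protocol in Python. Every name and rule in it (the message types, the space delimiter, the resource table) is made up for this sketch, not taken from any real protocol:

```python
# A toy application-layer protocol illustrating the four elements above.
# Types of messages: "REQUEST" and "RESPONSE".
# Syntax: the type, a space, a payload, terminated by "\n".
# Semantics: a REQUEST payload names a resource; a RESPONSE payload carries it.
# Rules: a RESPONSE is sent only in reply to a REQUEST.

RESOURCES = {"greeting": "hello"}

def encode(msg_type, payload):
    return f"{msg_type} {payload}\n"

def decode(raw):
    msg_type, _, payload = raw.rstrip("\n").partition(" ")
    return msg_type, payload

def handle(raw_request):
    msg_type, payload = decode(raw_request)
    if msg_type != "REQUEST":
        return encode("RESPONSE", "error: unknown message type")
    return encode("RESPONSE", RESOURCES.get(payload, "error: not found"))

print(handle(encode("REQUEST", "greeting")))  # prints "RESPONSE hello"
```

Real application-layer protocols such as HTTP define exactly these same things, just with far richer message syntax and rules.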

Some application-layer protocols are specified in RFCs and are therefore in the public domain. For example, the Web's application-layer protocol, HTTP (the HyperText Transfer Protocol [RFC 2616]), is available as an RFC. If a browser developer follows the rules of the HTTP RFC, the browser will be able to retrieve Web pages from any Web server that has also followed the rules of the HTTP RFC.

Many other application-layer protocols are proprietary and intentionally not available in the public domain. For example, many existing P2P file-sharing systems use proprietary application-layer protocols.

It is important to distinguish between network applications and application-layer protocols. An application-layer protocol is only one piece of a network application. Let's look at a couple of examples. The Web is a client-server application that allows users to obtain documents from Web servers on demand. The Web application consists of many components, including a standard for document formats (that is, HTML), Web browsers (for example, Firefox and Microsoft Internet Explorer), Web servers (for example, Apache and Microsoft servers), and an application-layer protocol. The Web's application-layer protocol, HTTP, defines the format and sequence of the messages that are passed between browser and Web server. Thus, HTTP is only one piece (albeit, an important piece) of the Web application. As another example, an Internet e-mail application also has many components, including mail servers that house user mailboxes; mail readers that allow users to read and create messages; a standard for defining the structure of an e-mail message; and application-layer protocols that define how messages are passed between servers, how messages are passed between servers and mail readers, and how the contents of certain parts of the mail message (for example, a mail message header) are to be interpreted. The principal application-layer protocol for electronic mail is SMTP (Simple Mail Transfer Protocol) [RFC 2821]. Thus, e-mail's principal application-layer protocol, SMTP, is only one piece (albeit, an important piece) of the e-mail application.




The Advantages of an Intrusion Detection System Based on Protocol Analysis

            An Intrusion Detection System (IDS) based on protocol analysis has many advantages over one based on simple pattern matching, including better performance, higher efficiency, and a better ratio of detection rate to false-alarm rate.

Let's take an example combining the HTTP protocol with the HTTP analyzer of the Ax3soft Sax2 intrusion detection system. "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" is a Unicode attack aimed at IIS; the attack's first step is to send such a request from a browser. An IDS based on simple pattern matching detects the attack with rules of the form (TCP dport:80, content, alert), like the following: 1) the system raises an "IIS Unicode Traversal" alarm if a captured TCP packet sent to port 80 contains the code " " (blank space); 2) the system raises an "Attempt to execute cmd" alarm if a captured TCP packet sent to port 80 contains the code "cmd.exe". Leaving optimization aside, this kind of IDS has two serious flaws: misinformation (false positives) and omission (missed attacks).

Misinformation An IDS based on pattern matching ignores two important questions: whether the TCP connection has actually been set up, and whether the matched string is legitimate. In practice, the latter situation is the more serious.

For example, Cookie or GET/POST data may legitimately contain the code " " (blank space). Unfortunately, pure pattern matching cannot distinguish a blank space in such data from one in an attack.
 
Omission An IDS based on pattern matching requires the matched string to appear within a single packet. An attacker who knows this rule can spread the attack across several packets instead of one.

For example, if an attack is transferred via Telnet, each keystroke may travel in its own packet, so a method that checks packets one at a time will miss the attack entirely. An attacker can connect to port 80 via Telnet, type the "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" request at the command line, and press the Enter key twice to send it. Sent this way, the attack may be split across many packets, up to 64.

In addition, an attacker can encode "cmd.exe" to achieve the same aim. For example, the URL string "cgi-bin" can be encoded as "%63%67%69%2d%62%69%6e". In this situation, a literal matching rule is useless.
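Python's standard library shows why decoding defeats such a rule; the encoded string is the one from the example above:

```python
from urllib.parse import unquote

# A pattern rule that looks for the literal string "cgi-bin" misses the
# percent-encoded form; decoding first restores the original text.
encoded = "%63%67%69%2d%62%69%6e"

print("cgi-bin" in encoded)            # False: raw match fails
print("cgi-bin" in unquote(encoded))   # True: match succeeds after decoding
print(unquote(encoded))                # cgi-bin
```

This is exactly why an analyzer must decode the URL before applying its signatures.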

The HTTP analyzer in the Ax3soft Sax2 intrusion detection system is designed to address these two disadvantages of an IDS based on simple pattern matching. It has the following features:
TCP stream reconstruction, based on the protocol analysis engine in the Ax3soft Sax2 intrusion detection system.
It analyzes and reconstructs HTTP requests that span multiple packets. For example, an attacker who splits the attack "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" into the six packets "Get", "/scripts/.. ../winnt/system32/", "c", "m", "d" and ".exe?/c dir HTTP/1.0" can evade a pattern-matching IDS. TCP stream reconstruction, however, reassembles the fragments into the original request "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" and detects the attack.
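The evasion-and-reassembly idea can be sketched in a few lines. The fragment boundaries and the signature below are chosen for illustration (a simplified split of the same request):

```python
# Sketch: a signature check on individual fragments misses an attack string
# split across packets, while checking the reassembled stream catches it.
fragments = ["GET /scripts/.. ../winnt/system32/", "c", "m", "d",
             ".exe?/c dir HTTP/1.0"]
signature = "cmd.exe"

per_packet_hit = any(signature in frag for frag in fragments)
stream = "".join(fragments)          # TCP stream reassembly
stream_hit = signature in stream

print(per_packet_hit)  # False: no single fragment matches
print(stream_hit)      # True: the reassembled request matches
```

A real reassembler must also order segments by TCP sequence number and handle retransmissions and overlaps, which this sketch omits.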

It captures the whole HTTP request. The Ax3soft Sax2 intrusion detection system analyzes HTTP requests that exceed the normal length (possible buffer overflow attacks) even when the request is composed of multiple TCP packets.

It analyzes the HTTP 0.9, HTTP 1.0 and HTTP 1.1 protocols, and will analyze and reconstruct multiple HTTP requests within a single HTTP connection.
It analyzes HTTP requests sent to a proxy server.

It decodes URL requests. If an attacker encodes a URL request as %XX sequences, e.g. "cgi-bin" as "%63%67%69%2d%62%69%6e", to evade detection, the Ax3soft Sax2 intrusion detection system will still detect it through its own protocol analysis and decoding modules.

The Ax3soft Sax2 intrusion detection system resolves each HTTP request into Method, Host, Path, Querystring and so on, and then analyzes them. If the path contains "cmd.exe", the HTTP analyzer will find "cmd.exe" after decoding and create an event. It then sends the event number and related information, in a uniform format, to the Response Module for further processing.
 
In summary, the HTTP Analyzer Module in the Ax3soft Sax2 intrusion detection system works as an independent detection module and achieves more reliable and efficient detection of attacks carried over HTTP by analyzing and decoding HTTP requests. Modular detection based on protocol analysis is clearly the direction in which intrusion detection systems are heading.



SSL Certificates and HTTPS Explained

              During the course of daily transactions involving computers, most people take for granted that the information sent and received is secure. The technology that allows the safe transfer of data can be attributed to the protocol known as SSL (Secure Socket Layer).

SSL Certificates, explained in layman's terms, define the security protocol used to protect online communications. Their most common use is to protect confidential data such as personal details or credit card information.

When a person initiates a transaction by clicking the submit button or entering data on a web site, the process of establishing a secure connection begins. The browser checks the SSL Certificate, verifying that it is valid and that the web site is legitimate. Data is then encrypted using keys negotiated by both the browser and the web site. A human could encrypt data by hand, but a computer using SSL is far faster and more efficient. SSL Certificates contain the site owner's public key; this is the basis of PKI (Public Key Infrastructure), the technology that allows SSL, TLS and HTTPS to share encrypted data. That, in brief, is SSL Certificates explained.

HTTPS (Hyper Text Transfer Protocol Secure) is displayed in the address bar of the browser, giving a visual cue that the site being visited has a secure connection. HTTPS does not function as a stand-alone protocol, however; it relies on SSL Certificates to encrypt data on web sites that use the technology. An HTTPS certificate is a digital encryption tool that uses the SSL protocol and is most often found on online banking and credit card payment web sites, and on any site that relies heavily on a secure connection for its customers and for the business itself.
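As a rough standard-library sketch (not tied to any particular certificate vendor), Python's ssl module shows the browser-like checks a client applies before trusting a server:

```python
import ssl

# Default client-side TLS settings: certificate validation against trusted
# CAs and hostname checking are both on, mirroring what a browser does
# before showing the HTTPS indicator.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must verify
print(context.check_hostname)                    # True: cert must match the host

# To actually use it (not run here), wrap a TCP socket before speaking HTTP:
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         ...  # send the HTTP request over the encrypted channel
```

If either check fails, the handshake is rejected, which is exactly the "assuring it is valid and the web site is legitimate" step described above.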

With the increasing number of computer hackers gaining access to sensitive documents and data around the world, it has become imperative for banks, credit card issuers and business owners large and small to protect their interests. Most if not all businesses own or lease computer equipment for inventory, ordering and billing, payroll and a myriad of other applications. The technology afforded by SSL Certificates, HTTPS and encrypted communications gives merchants and those who conduct business online assurance that personal information is safeguarded. SSL, TLS and HTTPS work together to ensure a safe, secure environment for the Internet community.





The Limitations of HTTP With Anonymous Browsing

                HTTP is a set of rules for requesting pages from a web server and transmitting pages (including text, graphic images, sound, video, and other multimedia files) to the requesting Web browser. HTTP uses TCP port 80. HTTP proxies can be used by Internet users to "hide" their online identity by masking their IP (Internet Protocol) address, often because some employers audit what their employees browse during office hours. As a result, increasing numbers of users have turned to HTTP proxies to keep their web-surfing habits anonymous.

HTTP proxies allow users to surf the web anonymously. The user can visit any website under the disguise of an anonymous IP address. HTTP proxies also have the added benefit of typically being compatible with whatever browser the user wishes to use.

On the negative side, HTTP proxies are not always secure: they lack security features and are often left open. Open HTTP proxies can expose the user to even more risk in terms of piracy and fraud.

Apart from that, HTTP is a text-based protocol: all of its communication is readable as-is, with no decoding, translation or decryption required. It is remarkable that although encryption is advisable for protecting one's identity, HTTP, one of the most commonly used transports for personal information, operates almost entirely in clear text. That is presumably because the HTTP protocol was created not with security in mind but with quick exchange of information as the key objective.

In fact, HTTP can be implemented on any available TCP port, but it has become the norm to use port 80 as the standard. Every web browser tries to connect on that port, so every web server listens on it; there are exceptions, of course, such as SSL, but one of the first rules a firewall administrator puts in the rule base is almost guaranteed to be 'allow TCP 80'.




How to Configure HTTP Endpoints in SQL Server

Definition:

SQL Server 2005 provides native Hypertext Transfer Protocol (HTTP) support that allows us to create Web Services on the database server. These Web Services expose Web Methods that a web application can access through endpoints, so clients can reach the services directly over the Internet.

An endpoint is the gateway through which HTTP-based clients send queries to the server; an HTTP endpoint listens for and receives client requests on port 80. After establishing an HTTP endpoint, you can create stored procedures or user-defined functions that are made available to endpoint users.

The SQL Server instance provides a WSDL generator that helps generate the description of a web service in the WSDL format, which is used by the clients to send requests.

To secure your data, you can protect your endpoints by granting permission to access an HTTP endpoint only to selected users.
How: 

  • Creation of required Database code
  • Creation of HTTP End Point Object
  • Finally verify the creation

Example(Sample Code)/Syntax:

CREATE ENDPOINT hr_WeeksNumber
    STATE = STARTED
AS HTTP (
    /* PATH = 'url' where the endpoint will be exposed on the host computer */
    PATH = '/HR',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'localhost'
)
FOR SOAP (
    WEBMETHOD 'pro_ListofWeeks' (
        NAME = 'testdb.dbo.pro_ListofWeeks',
        FORMAT = ROWSETS_ONLY   /* ROWSETS_ONLY = return only the result sets */
    ),
    WSDL = DEFAULT,
    SCHEMA = STANDARD,
    DATABASE = 'TESTDB',
    NAMESPACE = 'http://tempUri.org/'
);
-- Create database object (procedure)
CREATE PROCEDURE [dbo].[pro_ListofWeeks]
AS
    DECLARE @CWeek int
    SELECT @CWeek = DATEPART(wk, GETDATE())
    -- PRINT @CWeek
    SET @CWeek = @CWeek - 2
    EXEC pro_Week @CWeek
-- Procedure Week--

CREATE PROCEDURE pro_Week @week integer
AS
SET NOCOUNT ON
CREATE TABLE #temp (id integer)
WHILE (@week >= 1)
BEGIN
    INSERT INTO #temp VALUES (@week)
    SET @week = @week - 1
END
SET NOCOUNT OFF
SELECT id AS WeeksAvailable,
       'Week: ' + CONVERT(varchar, id) AS WeekNo
FROM #temp
ORDER BY 1 DESC


Conclusion
To verify the creation of the endpoint, you just need to create a client application. For example, if you build the client in C#, you first add a web reference; after that you can call the newly created service.
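As a purely illustrative alternative to the C# client, the sketch below builds the kind of SOAP message a client would POST to the /HR endpoint defined above. The web-method name and namespace come from the CREATE ENDPOINT sample; the envelope layout is a generic SOAP shape assumed for illustration, not taken from SQL Server documentation:

```python
# Hypothetical sketch of the SOAP request body for the /HR endpoint above.
METHOD = "pro_ListofWeeks"
NAMESPACE = "http://tempUri.org/"

envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <{METHOD} xmlns="{NAMESPACE}" />
  </soap:Body>
</soap:Envelope>"""

print(METHOD in envelope)  # True: the body names the web method to invoke

# A client would then send it (not run here) with, e.g.:
# import http.client
# conn = http.client.HTTPConnection("localhost", 80)
# conn.request("POST", "/HR", body=envelope,
#              headers={"Content-Type": "application/soap+xml"})
```

The WSDL generator mentioned earlier is what tells a real client the exact envelope shape and types the endpoint expects.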



Tips to Prevent HTTP Flood Attacks on a Dedicated Server

There has been an increasing number of attacks and threats. Hackers try to attack your server using the HTTP flood tactic, in which the attacker sends a stream of random HTTP requests to a targeted server. This makes the server unstable, and such attacks can also cause the server to crash. One effective way to mitigate them is a technique known as "tarpitting".

HTTP flood attacks are usually difficult to handle because there is no easy way to distinguish legitimate packets from those sent by the attacker. The server's TCP/IP stack is not the only target: the prime target of an HTTP flood DDoS attack is the web server running on it, which makes the attack more serious, harder to handle, and more likely to crash your server and leave it inaccessible.

Though such attacks are difficult to tackle, it is not impossible; there is a solution for handling an HTTP flood DDoS attack.

An advanced technique known as "tarpitting" is an efficient countermeasure against such attacks. If you are using a Linux-based server, you can enable tarpitting with the command below (where x.x.x.x is the source address to tarpit):

iptables -A INPUT -s x.x.x.x -p tcp -j TARPIT
Once a connection is established, tarpitting sets its TCP window size to a few bytes. Under the TCP/IP protocol design, the connecting device will initially send only as much data as fits in that window, then wait for a response from the server. If it never receives a response, it will keep retransmitting packets for a long period of time.

This is where tarpitting plays its role: by never responding to those packets, it protects your server from being tied up by unwanted HTTP requests.

To avoid such attacks and threats, it is important to choose the best web hosting services from a reputable hosting company. Whether you have a small business web hosting package or a dedicated server hosting account, a reliable web hosting provider's technical expertise can help you counteract such attacks quickly, keeping the damage to your server to a minimum.




How Does the HTTP Server Work?

                     The HTTP protocol is the most popular protocol in use in the TCP/IP arena. Every day billions of people use it in their internet sessions, when they surf the web.

In this article I am going to explain how this server works for those who need or want to understand this mechanism.
(The HTTP server needs to be installed in computers that hold html pages for the browsers to display).

The HTTP server opens a 'listening' socket for incoming connection to it. When a browser (the HTTP server's client) sends a request, it processes the request and sends back an answer. The browser request looks like this:

"GET /index.html HTTP/1.1
Host: qms.siptele.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.10) Firefox/3.6.10
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: "
The HTTP server looks for a file named "index.html" in the HTTP root directory and sends it back if it exists, or reports that the file does not exist with error code "404", as some of us have noticed.
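As a simplified sketch, the server-side parsing of such a request might look like this (the header list is shortened from the example above):

```python
# Parse a raw HTTP request into its request line and a header dictionary.
raw = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: qms.siptele.com\r\n"
    "User-Agent: Mozilla/5.0\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

lines = raw.split("\r\n")
method, path, version = lines[0].split(" ")   # the request line
headers = {}
for line in lines[1:]:
    if not line:
        break                                  # blank line ends the headers
    name, _, value = line.partition(": ")
    headers[name] = value

print(method, path)        # GET /index.html
print(headers["Host"])     # qms.siptele.com
```

With the path in hand, the server can map it onto the root directory for the requested Host and either send the file or a 404 response.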

The request contains several important lines:
The "Host" line tells the HTTP server which host the request refers to. This field allows one HTTP server to handle several hosts (or domains): it simply picks up this value and turns to the proper root directory for that host.

The "User-Agent" line tells the server which browser is in use, in our case Firefox. This field has no special importance; it just allows us to gather statistics about which browsers are in use.

The Accept fields inform the server about the browser's capabilities, so the server can attempt to send back content the browser can handle.

The "Keep-Alive" line tells the server that the browser wants to keep the current socket open for further requests and responses (here, for up to 115 seconds).

The "Referer" field is the most important piece of information for Internet marketers. It tells the server which page the browser came from. This information is logged and tells us things like:
a. What search phrases were used in a search engine (like Google) to find us.
b. Which ad of ours gets clicked.
c. Which article or page pointing to our site generated this visit.

This information is priceless. It tells us how our marketing efforts are doing. If we run ads in search engines, for example, we can learn which ad performs better than the others, and focus on it.
The first HTTP servers were only capable of locating files and sending them to browsers. Later, the need to access databases arose and led to the creation of "CGI" (Common Gateway Interface) programs. A CGI program is basically a native program that the HTTP server runs in a special process environment; it receives request parameters from the server and processes them.

After processing, it returns the information to the HTTP server, which sends it back to the browser.
Having a native program running on the server opens many options to the programmer: accessing and processing information in databases, creating dynamic behavior, and opening up whole new system capabilities.
Opening up the system in this way also increased the computer's vulnerability to hacking.
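The CGI contract just described can be sketched in a few lines of Python. The QUERY_STRING value is set by hand here purely for illustration; in a real deployment the HTTP server sets it before launching the program:

```python
import os

# Minimal sketch of the CGI contract: the server passes request data in
# environment variables, and the program writes headers, a blank line,
# then the body to stdout for the server to relay to the browser.
os.environ["QUERY_STRING"] = "name=world"   # normally set by the HTTP server

params = dict(pair.split("=", 1)
              for pair in os.environ["QUERY_STRING"].split("&"))

print("Content-Type: text/html")
print()                                      # blank line ends the headers
print(f"<html><body>Hello, {params['name']}!</body></html>")
```

Because the program is a separate process, it can do anything a native program can, which is both the power and the security risk discussed above.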

After several penetration incidents, a new, more restrictive set of rules was developed for the server. The server now runs under a restricted account and group, so it can only operate in the predefined directories allocated to it rather than access the entire system. The restricted account also ensures that an intruder who gains shell access (after crashing the HTTP server) will not be able to see and use system information to gain control of the computer itself.

There was demand for a scripting language to shorten development time. One answer was PHP, which originally stood for "Personal Home Page"; it was created by Rasmus Lerdorf, and the company Zend later developed the engine that powers it. By "scripting language" I mean a language that is interpreted at execution time. Such languages take more time to parse and execute compared to native programs (which just need to be run), but the rapid increase in computer performance has made the difference largely irrelevant.

PHP gained a huge user base and is one of the top scripting languages in use today. To run it, the HTTP server needs a PHP interpreter: when the HTTP server is asked to handle a PHP program, it runs the PHP interpreter as a CGI program, and the interpreter reads the PHP script and processes it.

A new mechanism, called "cookies", was invented to keep information in the browser. Cookies are small pieces of information sent by the HTTP server and kept by the browser. The browser stores this information and sends it back every time it accesses that HTTP server, which allows state to be kept for a long time. The information often contains a username and session id, so people don't have to enter their username and password every time they access the server. This is how Gmail "remembers" a user's session and opens the proper e-mail page without asking for credentials every time.

Nowadays HTTP servers are very sophisticated. Web 2.0 techniques let the browser send many requests and receive responses without refreshing the whole page. This makes it easy to process information inside a page without affecting the rest of it, and enables the quick, interactive exchanges seen on sites like Facebook.

I have explained here the operation and evolution of the HTTP server. This description should give a bird's-eye overview of the way an HTTP server works and help programmers understand the reasons things were created as they are.





HTTP - Hypertext Transfer Protocol

            HTTP is one of the most successful and widely used protocols on the Internet today. It is an application-layer protocol used to transmit and receive hypertext pages. HTTP allows a client, usually a web browser, to send a simple request and receive a response back from the server. Whenever you type a URL into the address bar of your browser, the browser first contacts the web server; the web server locates the requested page and sends the appropriate response. These requests and responses are issued in HTTP.
Each HTTP cycle has following steps:
Connection
A connection is established between the web browser and the web server via the TCP/IP protocols over a particular port, generally port 80. HTTP itself is not used to establish the connection; it only defines the rules that specify how the two sides communicate.
Request
The web browser sends a request to the server, specifying the resource to retrieve; HTTP defines the rules for sending it. Every HTTP request consists of a Request-Line, Request-Headers and a Message-Body. A sample HTTP request is shown below.
GET /index.htm HTTP/1.1
Accept: text/html, text/plain
User-Agent: Mozilla/4.0
Each HTTP request begins with the Request-Line, which consists of the request method, the URI, and the HTTP version. After the Request-Line come the Request-Headers, which describe the characteristics of the request and the kinds of data the client will accept.
Response
This is the response sent by the web server to the client. The server first locates the requested document, then sends the appropriate response in the format HTTP specifies. Every HTTP response consists of a Status-Line, Response-Headers and a Message-Body. A sample HTTP response is shown below.
HTTP/1.1 200 OK
Server: Apache/1.3.3.7
Date: Mon, 23 May 2005 22:38:34 GMT
Accept-Ranges: bytes
Content-Type: text/html
Content-Length: 512
Last-Modified: Tue, 18 Jan 2007 10:12:30 GMT
Connection: close
<title>hello world</title><meta http-equiv="Content-type" content="text/html; charset=ISO-8859-1">Hello world
The first line of every HTTP response is called the Status-Line; it consists of a numeric status code and an accompanying reason phrase, and is the server's answer to the HTTP request. After the Status-Line come the Response-Headers, which describe the characteristics of the data returned.
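As an illustrative sketch, the Status-Line and headers of a response like the sample above can be parsed as follows (the header list and body are shortened):

```python
# Split a raw HTTP response into status line, headers, and body.
raw = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 512\r\n"
    "\r\n"
    "<title>hello world</title>"
)

head, _, body = raw.partition("\r\n\r\n")      # blank line separates head/body
status_line, *header_lines = head.split("\r\n")
version, code, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(code, reason)              # 200 OK
print(headers["Content-Type"])   # text/html
```

The numeric code is what the browser branches on: 200 means render the body, 404 means show an error page, and so on.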
Close
Finally, the connection is closed. After each request/response cycle the connection is closed, and each time the web browser makes a request a new connection is established. The web server keeps no record of previously requested resources; in other words, no session is maintained. This is what makes HTTP a stateless protocol.



The Underlying Protocols of the Internet

           As development work on wide area networking proceeded in the early 1970s, leading to the emergence of the Internet, the TCP/IP protocols were also developed. TCP stands for Transmission Control Protocol, while IP stands for Internet Protocol. The adoption of the TCP/IP protocols as the Internet's protocols led to the integration of networks into one big network that has rapidly grown, reaching approximately 2.267 billion users as of the end of December 2011 (Internet World Stats). Today we have many application service protocols co-existing with TCP/IP as the underlying protocol.
TCP/IP is a transport protocol. It can be used to support applications directly or other protocols can be layered on TCP/IP to provide additional features. These protocols include:
  • HTTP (Hypertext Transfer Protocol) - Used by web browsers and web servers to exchange information. When a secure connection is required, the SSL (Secure Socket Layer) protocol or its successor, Transport Layer Security (TLS), is used to encrypt the connection; the browser then uses HTTPS instead of HTTP.

  • SMTP (Simple Mail Transfer Protocol) - Used to send and receive email over the TCP/IP protocol. Due to its limitation in message queuing it is normally used with other protocols like POP3 or IMAP.

  • TELNET (Telecommunication Network) - Used to connect to remote hosts via a telnet client. Your computer acts as a virtual terminal, letting you work on the remote computer as if it were on your desktop.

  • FTP (File Transfer Protocol) - Used to transfer files from one host to another using FTP client software over a TCP/IP network.

  • NNTP (Network News Transfer Protocol) - Used to transport news articles between news servers.
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are both Internet protocols used for transporting data. IP (Internet Protocol) is the underlying protocol of the Internet's virtual network; it sits beneath UDP and TCP. IP datagrams provide the basic transmission mechanism for all TCP/IP networks, including the Internet, ATM, local area networks such as Ethernet, and token ring networks. TCP is reliable and connection-oriented: it establishes a connection before transmitting data, and data can flow in either direction. UDP is a datagram protocol with limited capabilities: it gives no guarantee that a message arrives at the other end, and datagram packets may reach their destination in any order and need to be reassembled. UDP is sometimes preferred over TCP when there are only small amounts of data to transmit, since the received data takes little time to reassemble and the exchange is faster. UDP is also the preferred choice for sending packets that need no response. It does provide a checksum, which allows corrupted data to be detected.
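UDP's connectionless, fire-and-forget model can be sketched with Python's standard library on the loopback interface (the payload and addresses are arbitrary):

```python
import socket

# Minimal sketch of the UDP datagram model: no connection setup, each
# sendto() is an independent packet, and delivery is not acknowledged by
# the protocol itself.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # OS picks a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)         # fire and forget: no handshake

data, _ = receiver.recvfrom(1024)
print(data)                           # b'hello'

sender.close()
receiver.close()
```

A TCP exchange of the same data would first perform a three-way handshake via connect()/accept() and would acknowledge every segment, which is exactly the reliability-versus-overhead tradeoff described above.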
Application protocols sit above the two building blocks of the Internet protocols, UDP and TCP, which offer a clear tradeoff. UDP provides a simple message-relaying service that can silently drop messages but has minimal cost, because there is no accounting for failed delivery; it is often used for broadcasting, as in video streaming. TCP guarantees message delivery, but at the expense of additional messages, much higher latency, and storage costs.




The History and Future of Hyper Text Transfer Protocol

                  Hyper Text Transfer Protocol, or HTTP, is an application layer network protocol that provides a standard for communication on the Internet. Essentially, HTTP is a language that web browsers use to request information, such as a web page, from the web server on which the document is stored. Because the web browser and web server speak the same language (HTTP), the server is able to send the browser the various files (text, graphics, sounds, etc.) requested by its user. While the Hyper Text Transfer Protocol is just one of many scheme names (a scheme names the manner in which a browser accesses a resource), it is far and away the most frequently used. In fact, HTTP has become so ubiquitous that most web browsers no longer require users to enter it as part of web addresses; the majority of browsers automatically assume its presence.
Whether business owners realize it or not, HTTP plays a major role in the success of their companies, because Hyper Text Transfer Protocol is a standard that ensures a customer's web browser will be able to successfully communicate with their organization's web server. Communications between web browsers and web servers are very similar to two people attempting to have a conversation, because both of these exchanges require a single language that each party can speak and understand. Without a standard language like the Hyper Text Transfer Protocol, a web server would be like a person who is fluent in English but does not understand Spanish, and the web browser would be like an individual who is fluent in Spanish but does not understand English. Regardless of how articulate, intelligent, or interesting either of the individuals is, or how many times they ask a question or make a statement, the two speakers will not be able to exchange information. However, if both speakers are fluent in French, or the server and browser both "speak" HTTP, then they will be able to successfully share ideas and information.
Thanks to HTTP, visitors to a company's web site are able to retrieve contact information, browse and purchase merchandise, or learn more about the different services a business has to offer. Without Hyper Text Transfer Protocol, or a similar standardized form of communication, a user's web browser might make requests for files in a language that a company's server simply doesn't understand. Such a lack of standardization would result in potential customers being unable to view certain web pages; if consumers can't access a website, they won't be able to purchase anything from it.
The public has grown so accustomed to being able to access data on just about any website (with the exception of password protected information) that it is difficult to imagine an Internet where some sites could only be accessed if a user employed a particular browser. Some users would probably find switching between different browsers tedious, and they might avoid sites that required a browser they simply did not like. Ultimately, how good a company's products or services are is irrelevant if the public can't learn about them. Without HTTP, the Internet as the world currently knows it, and all of the conveniences it has to offer, would cease to exist, and all of the various businesses, organizations, and individuals who rely upon it as a means of earning money, disseminating information, purchasing goods and services, or communicating with one another would have no choice but to find alternative ways of doing so.
While HTTP is today's protocol of choice -- and it's hard to imagine what the Internet would be like without it -- the Hyper Text Transfer Protocol was not always the standard protocol used for Internet communications. In order to fully understand the standardization process of HTTP, it is necessary to review the various protocols and computer communication networks that preceded the Hyper Text Transfer Protocol.
Initially, ARPANET (Advanced Research Projects Agency Network), a computer network developed by the United States Department of Defense, and the predecessor to the Internet, used the 1822 protocol to communicate information between hosts. A message sent using this protocol was composed of a message type, host address, and data field. While the 1822 protocol eventually proved to be an inadequate means of managing various connections between different applications on a single host, it is important to remember that it played an integral role in the development of the Internet by laying the groundwork for future protocols to come.
NCP (Network Control Program) replaced the 1822 protocol as ARPANET's chosen protocol, because NCP was able to do something 1822 could not: offer a standardized and dependable means of two-way, flow-controlled communication between different processes residing on different hosts. While NCP was an improvement upon the 1822 protocol, its reign as ARPANET's standard protocol ended in 1983 when it was replaced by TCP/IP (Transmission Control Protocol/Internet Protocol).
TCP/IP was chosen as the official standard because it was a relatively inexpensive, simple, and easy-to-use protocol suite. Unlike the protocols that came before it, however, TCP/IP was not replaced; instead, HTTP was designed to run on top of it. HTTP was first developed in 1991 by Tim Berners-Lee to meet needs unique to the emerging World Wide Web, such as forwarding a user's request to another server or performing index searches. Prior to the development of HTTP, usage of the Internet was nowhere near as widespread as it is today, perhaps due in part to the lack of one standard protocol for requesting and delivering documents between networked computers. The first version of HTTP was known as HTTP/0.9, and its main purpose was to transfer raw data from one machine to another. The second incarnation, HTTP/1.0, was released in 1996 as an improvement upon the original because it allowed messages to be in MIME-like formats (carrying information about the data transferred, such as its content type and the time and date of the transfer). While HTTP/1.0 was definitely an improvement upon HTTP/0.9, it still did not allow for certain features such as persistent connections or virtual hosting. Consequently, HTTP/1.1 was released in 1997 and continues to be the version of the protocol currently used today.
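The differences between the three HTTP versions described above show up directly in the request text a client sends. Below is a sketch contrasting the same page request under each version; the host and path are hypothetical examples.

```python
def http09_request(path):
    # HTTP/0.9: a bare GET line -- no version number, no headers.
    # The server replies with raw data only.
    return f"GET {path}\r\n"

def http10_request(path):
    # HTTP/1.0: adds the version number and MIME-like headers,
    # but each request still opens a fresh connection.
    return (f"GET {path} HTTP/1.0\r\n"
            "User-Agent: sketch/0.1\r\n"   # hypothetical client name
            "\r\n")

def http11_request(host, path):
    # HTTP/1.1: the Host header is mandatory (this is what makes
    # virtual hosting possible), and connections persist by default,
    # so several requests can reuse one connection.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "\r\n")

for req in (http09_request("/index.html"),
            http10_request("/index.html"),
            http11_request("www.example.com", "/index.html")):
    print(req.splitlines()[0])
```

The mandatory Host header in HTTP/1.1 is what lets a single server at one IP address serve many differently named websites, and persistent connections avoid the cost of reopening a connection for every image and file on a page.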
If history is any indication, HTTP/1.1 will probably not remain the standard protocol forever. In fact, an extension framework sometimes referred to as HTTP/1.2 was published in 2000, but it was experimental and has yet to replace its predecessor as the version du jour. Only time will tell whether a successor to HTTP/1.1, or perhaps a different protocol altogether, will eventually take its place.

Article Source: http://EzineArticles.com/?expert=Caitlin_McAuliffe