Application-Layer Protocols

              We have just learned that network processes communicate with each other by sending messages into sockets. But how are these messages structured? What are the meanings of the various fields in the messages? When do the processes send the messages? These questions bring us into the realm of application-layer protocols. An application-layer protocol defines how an application's processes, running on different end systems, pass messages to each other. In particular, an application-layer protocol defines:

o The types of messages exchanged, for example, request messages and response messages
o The syntax of the various message types, such as the fields in the message and how the fields are delineated
o The semantics of the fields, that is, the meaning of the information in the fields
o Rules for determining when and how a process sends messages and responds to messages
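As an illustration of these four elements, here is a sketch of a toy application-layer protocol in Python; the verb names and line format are invented for this example, not taken from any real protocol:

```python
# A toy application-layer protocol illustrating the four elements above:
# message types (request/response), syntax (verb, one space, argument,
# CRLF terminator), semantics (each verb names an action on the argument),
# and rules (the client builds and sends a request; the server parses it).

def build_request(verb: str, argument: str) -> bytes:
    """Syntax: a verb, one space, an argument, terminated by CRLF."""
    return f"{verb} {argument}\r\n".encode("ascii")

def parse_request(raw: bytes):
    """Semantics: split the wire bytes back into the two fields."""
    line = raw.decode("ascii").rstrip("\r\n")
    verb, _, argument = line.partition(" ")
    return verb, argument

req = build_request("GET", "/index.html")
print(req)                 # b'GET /index.html\r\n'
print(parse_request(req))  # ('GET', '/index.html')
```

Both sides must agree on exactly this syntax and semantics, which is precisely what a written protocol specification pins down.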

Some application-layer protocols are specified in RFCs and are therefore in the public domain. For example, the Web's application-layer protocol, HTTP (the HyperText Transfer Protocol [RFC 2616]), is available as an RFC. If a browser developer follows the rules of the HTTP RFC, the browser will be able to retrieve Web pages from any Web server that has also followed the rules of the HTTP RFC.

Many other application-layer protocols are proprietary and intentionally not available in the public domain. For example, many existing P2P file-sharing systems use proprietary application-layer protocols.

It is important to distinguish between network applications and application-layer protocols. An application-layer protocol is only one piece of a network application. Let's look at a couple of examples. The Web is a client-server application that allows users to obtain documents from Web servers on demand. The Web application consists of many components, including a standard for document formats (that is, HTML), Web browsers (for example, Firefox and Microsoft Internet Explorer), Web servers (for example, Apache and Microsoft servers), and an application-layer protocol. The Web's application-layer protocol, HTTP, defines the format and sequence of the messages that are passed between browser and Web server. Thus, HTTP is only one piece (albeit, an important piece) of the Web application. As another example, an Internet e-mail application also has many components, including mail servers that house user mailboxes; mail readers that allow users to read and create messages; a standard for defining the structure of an e-mail message; and application-layer protocols that define how messages are passed between servers, how messages are passed between servers and mail readers, and how the contents of certain parts of the mail message (for example, a mail message header) are to be interpreted. The principal application-layer protocol for electronic mail is SMTP (Simple Mail Transfer Protocol) [RFC 2821]. Thus, e-mail's principal application-layer protocol, SMTP, is only one piece (albeit, an important piece) of the e-mail application.




The Advantages of an Intrusion Detection System Based on Protocol Analysis

            An intrusion detection system (IDS) based on protocol analysis has many advantages over one based on simple pattern matching, including better performance, greater efficiency, and a higher detection rate relative to its false alarm rate.

Let's take an example using the HTTP protocol and the HTTP analyzer of the Ax3soft Sax2 intrusion detection system. The request "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" is a Unicode attack aimed at IIS. The attack's first step is to send such a request from the browser. An IDS based on simple pattern matching will detect the attack with rules like the following: 1) the system raises an "IIS Unicode Directory Traversal" alarm if a captured TCP packet sent to port 80 contains the code " " (blank space); 2) the system raises an "Attempt to execute cmd" alarm if a captured TCP packet sent to port 80 contains the code "cmd.exe". Setting optimization aside, this kind of IDS has two serious flaws: misinformation (false alarms) and omission (missed attacks).
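A minimal sketch of this per-packet pattern matching in Python (the rule format is invented for illustration; the port, payloads, and alert names come from the rules above):

```python
# Toy per-packet pattern-matching IDS: each rule matches a destination port
# and a byte string that must appear inside a single packet's payload.
RULES = [
    {"dport": 80, "content": b" ",       "alert": "IIS Unicode Directory Traversal"},
    {"dport": 80, "content": b"cmd.exe", "alert": "Attempt to execute cmd"},
]

def match_packet(dport: int, payload: bytes):
    """Return the alerts triggered by one packet, checked in isolation."""
    return [r["alert"] for r in RULES
            if r["dport"] == dport and r["content"] in payload]

attack = b"GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0"
print(match_packet(80, attack))  # both rules fire on this single packet
```

Because each rule looks only at one packet's bytes, both weaknesses described below follow directly from this design.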

Misinformation. An IDS based on pattern matching ignores two important questions: whether the TCP connection has actually been set up, and whether the matched string is legitimate. In fact, the latter situation is more serious.

For example, Cookie or GET/POST data may legitimately include the code " " (blank space). Unfortunately, pattern matching cannot distinguish a harmless blank space in such data from one in an attack.
 
Omission. An IDS based on pattern matching requires the matched string to appear within a single packet. Attackers who are aware of this rule can carry out their attacks with several packets instead of one.

For example, when an attack is transferred via Telnet, each byte may travel in its own packet. Checking packets individually will therefore lead to serious omission of attacks. An attacker can connect to port 80 via Telnet, type the request "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" at the command line, and press Enter to send the attack. With this method the attack may be composed of many packets, up to 64.
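This evasion, and why stream reassembly defeats it, can be sketched in a few lines (the packet split mirrors the six-packet example discussed later in this article):

```python
# Per-packet matching misses an attack split across packets;
# matching against the reassembled TCP stream catches it.
NEEDLE = b"cmd.exe"

packets = [
    b"GET ",
    b"/scripts/.. ../winnt/system32/",
    b"c", b"m", b"d",
    b".exe?/c dir HTTP/1.0",
]

per_packet_hits = any(NEEDLE in p for p in packets)  # inspect each packet alone
stream_hits = NEEDLE in b"".join(packets)            # inspect the joined stream

print(per_packet_hits)  # False: no single packet contains "cmd.exe"
print(stream_hits)      # True: the reassembled stream does
```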

In addition, attackers can encode "cmd.exe" to achieve the same aim. For example, the URL request fragment "cgi-bin" can be encoded as "%63%67%69%2d%62%69%6e". In this situation, a literal-string rule is useless.
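Percent-decoding makes the hidden string visible again; Python's standard library demonstrates this in a couple of lines:

```python
from urllib.parse import unquote

encoded = "%63%67%69%2d%62%69%6e"
print(unquote(encoded))               # cgi-bin
print("cgi-bin" in encoded)           # False: a literal match misses it
print("cgi-bin" in unquote(encoded))  # True once the IDS decodes first
```

This is exactly why an IDS must decode %XX sequences before applying its rules.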

The HTTP Analyzer in the Ax3soft Sax2 intrusion detection system is designed to address these two disadvantages of simple pattern-matching IDSs. It has the following features:
TCP stream reconstruction, based on the protocol analysis engine in the Ax3soft Sax2 intrusion detection system.
It analyzes and reconstructs HTTP requests split across multiple packets. For example, an attacker may spread the attack "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0" over six packets, "GET", "/scripts/.. ../winnt/system32/", "c", "m", "d", and ".exe?/c dir HTTP/1.0", to evade a pattern-matching IDS. TCP stream reconstruction, however, detects this and reassembles the attack into its original form, "GET /scripts/.. ../winnt/system32/cmd.exe?/c dir HTTP/1.0".

Capture of the whole HTTP request. The Ax3soft Sax2 intrusion detection system analyzes HTTP requests that exceed normal length (possible buffer overflow attacks), even when the request is composed of multiple TCP packets.

Analysis of the HTTP 0.9, HTTP 1.0, and HTTP 1.1 protocols. The Ax3soft Sax2 intrusion detection system analyzes and reconstructs multiple HTTP requests within a single HTTP connection.
Analysis of HTTP requests sent to a proxy server.

URL decoding. Attackers may encode a URL request as %XX sequences, e.g., "cgi-bin" as "%63%67%69%2d%62%69%6e", to evade detection. The Ax3soft Sax2 intrusion detection system detects this through its own protocol analysis and decoding modules.

The Ax3soft Sax2 intrusion detection system resolves an HTTP request into its method, host, path, query string, and so on, and then analyzes each part. If the path contains "cmd.exe", the HTTP analyzer recovers "cmd.exe" after decoding and raises an event. It then sends the event number and related information in a uniform format to the response module for further processing.
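The parsing step just described can be sketched roughly as follows (the function and field names are invented for illustration; only the general decompose-then-decode idea comes from the text):

```python
from urllib.parse import unquote, urlsplit

def resolve_request(request_line: str, host: str) -> dict:
    """Split an HTTP request line into method, host, path, and query
    string, percent-decoding the path so hidden strings become visible."""
    method, target, _version = request_line.split(" ", 2)
    parts = urlsplit(target)
    return {
        "method": method,
        "host": host,
        "path": unquote(parts.path),
        "querystring": unquote(parts.query),
    }

req = resolve_request("GET /scripts/%63md.exe?/c+dir HTTP/1.0", "victim.example")
print(req["path"])               # /scripts/cmd.exe
print("cmd.exe" in req["path"])  # True: recovered despite the %63 encoding
```

Once the request is decomposed this way, each field can be checked against rules that are appropriate for that field, rather than grepping raw packets.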
 
In summary, the HTTP Analyzer module in the Ax3soft Sax2 intrusion detection system works as an independent detection module and achieves more reliable and efficient detection of attacks carried over HTTP by analyzing and decoding HTTP requests. Modular detection based on protocol analysis of this kind is clearly the direction in which intrusion detection systems will develop.



SSL Certificates and HTTPS Explained

              During the course of daily transactions involving computers, most people take for granted that the information sent and received is secure. The technology that allows the safe transfer of data can be attributed to the protocol known as SSL (Secure Sockets Layer).

An SSL certificate, explained in layman's terms, underpins the security protocol used to protect online communications. Its most common use is to protect confidential data such as personal details or credit card information.

When a person initiates a transaction by clicking the submit button or entering data on a web site, the process of establishing a secure connection begins. The browser checks the SSL certificate to ensure it is valid and that the web site is legitimate. Data is then encrypted using keys negotiated by both the browser and the web site. A human could encrypt data by hand, but a computer using SSL is far faster and more efficient. SSL certificates contain the site owner's public key; this is the basis of PKI (Public Key Infrastructure), the technology that allows the sharing of encrypted data by SSL, TLS, and HTTPS. Thus, SSL certificates explained.

HTTPS (Hyper Text Transfer Protocol Secure) is displayed in the address bar of the browser, giving a visual cue that the site being visited has a secure connection. HTTPS does not function as a stand-alone protocol, however; it relies on SSL certificates to encrypt data on web sites that use the technology. HTTPS is a digital encryption mechanism built on the SSL protocol, and it is most often used by online banking and credit card payment web sites, and by businesses that rely heavily on a secure connection for their customers and themselves.

With the increasing number of computer hackers gaining access to sensitive documents and data around the world, it has become imperative for banks, credit card issuers, and business owners large and small to protect their interests. Most if not all businesses own or lease computer equipment to assist them with inventory, ordering and billing, payroll, and a myriad of other applications. The technology afforded by SSL certificates, HTTPS, and encryption-based communications gives merchants and those who conduct business online the assurance that personal information is safeguarded. SSL, TLS, and HTTPS work together to ensure a safe, secure environment for the Internet community.





The Limitations of HTTP With Anonymous Browsing

                HTTP is a set of rules for requesting pages from a web server and transmitting pages (including text, graphic images, sound, video, and other multimedia files) to the requesting web browser. HTTP uses TCP port 80. HTTP proxies can be used by Internet users to "hide" their online identity by concealing their IP (Internet Protocol) address. One motivation is that some employers audit what their employees browse during office hours. As a result, increasing numbers of users have turned to HTTP proxies to keep their web-surfing habits anonymous.

HTTP proxies allow users to surf the web anonymously: the user can visit any website they wish under the disguise of an anonymous IP address. HTTP proxies also have the added benefit of typically being compatible with any browser the user wishes to use.

On the negative side, HTTP proxies are not always secure: they often lack security features and are frequently left open. Open HTTP proxies can expose the user to even greater risks in terms of privacy and fraud.

Apart from that, HTTP is a text-based protocol: all of its communication is readable as-is, with no decoding, translation, or decryption required. It is remarkable that, although encryption is advisable for protecting one's identity, HTTP, one of the most commonly used transports for personal information, operates almost entirely in clear text. This is because the HTTP protocol was created not with security in mind but with quick exchange of information as the key objective.

In fact, HTTP can be run on any available TCP port, but port 80 has become the standard. Every web browser tries to connect on that port, so every web server listens on it. There are exceptions, such as SSL (HTTPS, on port 443), but one of the first rules a firewall administrator puts in the rule base is almost guaranteed to be 'allow TCP 80'.




How to Configure HTTP Endpoints in SQL Server

Definition:

SQL Server 2005 provides native Hypertext Transfer Protocol (HTTP) support that allows us to create web services on the database server. These web services expose web methods that a web application can access through endpoints, so clients can access the services directly over the Internet.

An endpoint is the gateway through which HTTP-based clients can send queries to the server; an HTTP endpoint listens for and receives client requests on port 80. After establishing an HTTP endpoint, you can create stored procedures or user-defined functions and make them available to endpoint users.

The SQL Server instance provides a WSDL generator that helps generate the description of a web service in the WSDL format, which is used by the clients to send requests.

To secure your data, you can protect your endpoints by granting only selected users permission to access an HTTP endpoint.
How: 

  • Creation of required Database code
  • Creation of HTTP End Point Object
  • Finally verify the creation

Example(Sample Code)/Syntax:

CREATE ENDPOINT hr_WeeksNumber
    STATE = STARTED
AS HTTP (
    /* PATH = 'url': where the endpoint is exposed on the host computer */
    PATH = '/HR',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'localhost'
)
FOR SOAP (
    WEBMETHOD 'pro_ListofWeeks' (
        NAME = 'testdb.dbo.pro_ListofWeeks',
        FORMAT = ROWSETS_ONLY   /* returns only the result sets */
    ),
    WSDL = DEFAULT,
    SCHEMA = STANDARD,
    DATABASE = 'TESTDB',
    NAMESPACE = 'http://tempUri.org/'
);

-- Create database object (procedure)
CREATE PROCEDURE [dbo].[pro_ListofWeeks]
AS
    DECLARE @CWeek int
    SELECT @CWeek = DATEPART(wk, GETDATE())
    -- PRINT @CWeek
    SET @CWeek = @CWeek - 2
    EXEC pro_Week @CWeek

-- Procedure pro_Week --

CREATE PROCEDURE pro_Week @week integer
AS
    SET NOCOUNT ON
    CREATE TABLE #temp (id integer)
    WHILE (@week >= 1)
    BEGIN
        INSERT INTO #temp VALUES (@week)
        SET @week = @week - 1
    END
    SET NOCOUNT OFF
    SELECT id AS WeeksAvailable,
           'Week: ' + CONVERT(varchar, id) AS WeekNo
    FROM #temp
    ORDER BY 1 DESC


Conclusion
To verify the creation of the endpoint, you just need to create a client application. For example, if you create the client application in C#, you first need to add a web reference; after that you can access the newly created service.



Tips to Prevent Http Flood Attack on the Dedicated Server

There has been an increased number of attacks and threats. Hackers try to attack your server using the HTTP flood tactic. There is one effective way to avoid such attacks, known as "tarpitting". In this type of attack, the hacker sends random HTTP requests to a targeted server. This makes the server unstable, and such attacks can also cause it to crash.

It is usually difficult to handle HTTP flood attacks because there is no easy way to distinguish legitimate packets from those sent by the attacker. The server's TCP/IP stack is not the only target: the prime target of an HTTP flood DDoS attack is the web server running on it, which makes the attack more serious and harder to handle; your server may crash, leaving it inaccessible.

Though such attacks are difficult to tackle, they are not impossible to handle. There is a solution for dealing with HTTP flood DDoS attacks.

An advanced technique known as "tarpitting" is an efficient countermeasure against such attacks. If you are using a Linux-based server, you can enable tarpitting using the command below:

iptables -A INPUT -s x.x.x.x -p tcp -j TARPIT
Once a connection is established, tarpitting automatically sets its window size to a few bytes. By the design of the TCP/IP protocol, the connecting device will initially send only as much data as fits in that window, then wait for the server to respond. If the connecting device does not receive a response, it will keep retransmitting packets for a long period of time.

This is where tarpitting plays its actual role: it does not respond to those packets, protecting your server from unwanted HTTP requests.

To avoid such attacks and threats, it is important to choose the best web hosting services from a reputable hosting company. Whether you have a small business web hosting package or a dedicated server hosting account, a reliable web hosting provider's technical expertise can help you counter such attacks with minimum loss of time and the least damage to your server.




How Does the HTTP Server Work?

                     The HTTP protocol is the most popular protocol in use in the TCP/IP arena. Every day billions of people use it in their internet sessions, when they surf the web.

In this article I am going to explain how this server works for those who need or want to understand this mechanism.
(The HTTP server needs to be installed on computers that hold HTML pages for browsers to display.)

The HTTP server opens a 'listening' socket for incoming connection to it. When a browser (the HTTP server's client) sends a request, it processes the request and sends back an answer. The browser request looks like this:

"GET /index.html HTTP/1.1
Host: qms.siptele.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.10) Firefox/3.6.10
Accept: image/png,image/*;q=0.8,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: "
The HTTP server looks for a file named "index.html" in the HTTP root directory and sends it back if it exists, or informs the browser that the file does not exist with error code "404", as some of us have noticed.
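The lookup just described can be sketched as a toy handler; the function name is invented, and returning only a status line (no headers or body) is a simplification for illustration:

```python
import os

def handle_request(root: str, request_line: str) -> str:
    """Map a GET target onto the document root and answer 200 or 404."""
    method, target, _version = request_line.split(" ", 2)
    if method != "GET":
        return "HTTP/1.1 405 Method Not Allowed"
    if target.endswith("/"):
        target += "index.html"  # "/" conventionally means the index page
    path = os.path.join(root, target.lstrip("/"))
    if os.path.isfile(path):
        return "HTTP/1.1 200 OK"   # real servers also send the file body
    return "HTTP/1.1 404 Not Found"
```

A real server would also stream the file contents back and guard against ".." escaping the root directory; the sketch shows only the found/not-found decision.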

The request contains several important lines:
The "Host" line tells the HTTP server which host the request refers to. This field allows the HTTP server to serve several hosts (or domains) from one machine: the server simply picks up this value and turns to the proper root directory for that host.

The "User-Agent" line tells the server which browser is in use; in our case it is the Firefox browser. This field has no special importance; it just allows us to gather statistics about browser usage.

The "Accept" fields inform the server about the browser's capabilities. The server attempts to send back content that the browser can handle.

The "Keep-Alive" line tells the server that the browser wants to keep the current socket open for further requests and responses (here, for up to 115 seconds).

The "Referer" field is the most important piece of information for Internet marketers. It tells the server which page the browser came from. This information is logged and tells us things like:
a. What search phrases were used in the search engine (like Google) to find us.
b. Which of our ads gets clicked.
c. Which article/page pointing to our site generated this visit.

This information is priceless. It tells us how our marketing efforts are doing. If we run ads in search engines, for example, we can learn which ad performs better than the others and focus on it.
The first HTTP servers were capable only of locating files and sending them to browsers. Later on, the need to access databases arose and brought about the creation of "CGI" (Common Gateway Interface) programs. A CGI program is basically a native program that is run by the HTTP server in a special process environment; it gets request parameters from the server and processes them.

After the processing it returns the information to the HTTP server, which sends it back to the browser.
Having a native program running on the server opens many options to the programmer. He can access and process information in databases, create dynamic system behavior, and open up whole new system capabilities.
Opening up the system also increased the vulnerability of the computer to hacking...

After several penetration incidents, a new, restrictive set of rules was developed for the server. The server now runs with the privileges of a restricted account and group, so it can only operate in the predefined directories allocated to it and cannot access the entire system. The restricted account also ensures that an intruder gaining shell access (after crashing the HTTP server) will not be able to see and use system information to gain control over the computer itself.

There was also a demand for a scripting language to shorten development time. This demand was answered by the scripting language "PHP", which originally stood for "Personal Home Page" (the company Zend later developed its core engine). By "scripting language" I mean a language that is interpreted line by line at execution time. Such languages take more time to parse and execute compared to native programs (which just need to be run), but the rapid increase in computer performance makes this largely irrelevant.

PHP gained a huge user base and is one of the top scripting languages in use today. To run it, the HTTP server needs a PHP interpreter. When the HTTP server is asked to handle a PHP program, it runs the PHP interpreter as a CGI program, and the interpreter receives the PHP script and processes it.

A new mechanism, called "cookies", was invented to keep information in the browser. Cookies are small pieces of information sent from the HTTP server and kept by the browser. The browser stores this information and sends it back every time it accesses the HTTP server, which allows state to be kept for a long time. The information often contains a username and session ID so people don't have to enter their username and password every time they access the server. This is how Gmail "remembers" its users and their sessions and can open the proper e-mail page without asking for credentials every time.
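The round trip just described can be sketched as follows; the header names are real HTTP headers, but the session value and helper function are invented for illustration:

```python
# Server side: a Set-Cookie header carries "name=value" plus attributes.
# Browser side: stored cookies are replayed in a single Cookie header.

def parse_set_cookie(header_value: str):
    """Extract (name, value) from a Set-Cookie header, ignoring attributes."""
    first_pair = header_value.split(";", 1)[0]
    name, _, value = first_pair.partition("=")
    return name.strip(), value.strip()

jar = {}  # the browser's cookie store for this server
name, value = parse_set_cookie("session_id=abc123; Path=/; HttpOnly")
jar[name] = value

# On the next request the browser sends everything in the jar back:
cookie_header = "; ".join(f"{n}={v}" for n, v in jar.items())
print(cookie_header)  # session_id=abc123
```

The server then looks up "abc123" in its session table and recognizes the returning user without asking for credentials again.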

Nowadays HTTP servers are very sophisticated. Web 2.0 allows the browser to send many requests and receive responses without refreshing the whole screen. This makes it easy to process information inside a page without affecting the whole page, and it is what makes exchanging information in sites like Facebook quick and interactive.

I have explained here the operation and evolution of the HTTP server. This description should give a bird's-eye overview of the way an HTTP server works and allow programmers to understand why things were created as they are.
