How to Better Secure Your Web Server


The World Wide Web is one of the most important ways for an organization to publish information, interact with Internet users, and establish an e-commerce/e-government presence. However, if an organization is not rigorous in configuring and operating its public Web site, it may be vulnerable to a variety of security threats.

Although the threats in cyberspace remain largely the same as in the physical world (e.g., fraud, theft, vandalism, and terrorism), they are far more dangerous as a result of three important developments: increased efficiency, action at a distance, and rapid technique propagation.

Increased Efficiency - Automation makes attacks, even those with minimal opportunity for success, efficient and extremely profitable. For example, in the physical world, an attack that would succeed once in 10,000 attempts would be ineffectual because of the time and effort required, on average, for a single success. The time invested in achieving a single success would be outweighed by the time invested in the 9,999 failures. On the Internet, automation enables the same attack to be a stunning success.

Computing power and bandwidth are becoming less expensive daily, while the number of hosts that can be targeted is growing rapidly. This synergy means that almost any attack, no matter how low its success rate, will likely find many systems to exploit.

Action at a Distance - The Internet allows action at a distance. The Internet has no borders, and every point on the Internet is potentially reachable from every other point. This means that an attacker in one country can target a remote Web site in another country as easily as one close to home.

Rapid Technique Propagation - The Internet allows for easier and more rapid technique propagation. Before the Internet, a newly developed attack technique could take years, if it spread at all, to propagate widely, allowing time to develop effective countermeasures. Today, a new technique can be propagated within hours or days. It is now more difficult to develop effective countermeasures in a timely manner.

Compromised Web sites have served as an entry point for intrusions into many organizations’ internal networks. Organizations can face monetary losses, damage to reputation, or legal action if an intruder successfully violates the confidentiality of their data.

In addition, an organization can find itself in an embarrassing situation resulting from malicious intruders changing the content of the organization’s Web pages.

Denial of service (DoS) attacks can make it difficult, if not impossible, for users to access an organization’s Web site.

These attacks may cost the organization significant time and money. DoS attacks are easy for attackers to attempt because of the number of possible attack vectors, the variety of automated tools available, and the low skill level needed to use the tools. DoS attacks, as well as threats of initiating DoS attacks, are also increasingly being used to blackmail organizations.

Many DoS attacks are a result of botnets, a group of computers with a program surreptitiously installed to cause them to attack other systems. Botnets are often composed primarily of poorly secured home computers that have high-speed Internet connectivity. Botnets can be used to perform distributed denial of service (DDoS) attacks, which are much harder to defend against because of the large number of attacking hosts.

Kossakowski and Allen identified three main security issues related to the operation of a publicly accessible Web site:

  • Misconfiguration or other improper operation of the Web server, which may result, for example, in the disclosure or alteration of proprietary or sensitive information. This information can include items such as the following:
    • Assets of the organization
    • Configuration of the server or network that could be exploited for subsequent attacks
    • Information regarding the users or administrator(s) of the Web server, including their passwords.
  • Vulnerabilities within the Web server that might allow, for example, attackers to compromise the security of the server and other hosts on the organization’s network by taking actions such as the following:
    • Defacing the Web site or otherwise affecting information integrity
    • Executing unauthorized commands or programs on the host operating system, including ones that the intruder has installed
    • Gaining unauthorized access to resources elsewhere in the organization’s computer network
    • Launching attacks on external sites from the Web server, thus concealing the intruders’ identities and perhaps making the organization liable for damages
    • Using the server as a distribution point for illegally copied software, attack tools, or pornography, perhaps making the organization liable for damages
    • Using the server to deliver attacks against vulnerable Web clients to compromise them.
  • Inadequate or unavailable defense mechanisms for the Web server to prevent certain classes of attacks, such as DoS attacks, which disrupt the availability of the Web server and prevent authorized users from accessing the Web site when required.

In recent years, as the security of networks and server installations has improved, attacks have shifted toward poorly written software applications and scripts that allow attackers to compromise the security of the Web server or collect data from back-end databases.

Many dynamic Web applications do not perform sufficient validation of user input, allowing attackers to submit commands that are run on the server.

Common examples of this form of attack are structured query language (SQL) injection, where an attacker submits input that will be passed to a database and processed, and cross-site scripting, where an attacker manipulates the application to store scripting language commands that are activated when another user accesses the Web page.
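To make these two examples concrete, the following minimal Python sketch (standard library only; the table name, column names, and user fields are hypothetical) contrasts unsafe query construction with a parameterized query, and shows the kind of output encoding that keeps stored script commands from executing in another user's browser.

```python
import html
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Unsafe pattern: concatenating user input into the SQL text lets input
    # such as "' OR '1'='1" rewrite the query (SQL injection):
    #   conn.execute("SELECT id, bio FROM users WHERE name = '" + username + "'")
    #
    # Safer pattern: a parameterized query keeps the input as data, never SQL.
    return conn.execute(
        "SELECT id, bio FROM users WHERE name = ?", (username,)
    ).fetchone()

def render_bio(bio: str) -> str:
    # Encode user-supplied content before inserting it into HTML so that a
    # stored "<script>...</script>" payload is displayed as text, not run.
    return "<p>" + html.escape(bio) + "</p>"
```

In the same spirit, the prepared-statement and output-escaping facilities built into an application framework should generally be preferred over hand-rolled filtering.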

A number of steps are required to ensure the security of any public Web server. As a prerequisite for taking any step, however, it is essential that the organization have a security policy in place.

Taking the following steps within the context of the organization’s security policy should prove effective:

  • Step 1: Installing, configuring, and securing the underlying operating system (OS)
  • Step 2: Installing, configuring, and securing Web server software
  • Step 3: Employing appropriate network protection mechanisms (e.g., firewall, packet filtering router, and proxy)
  • Step 4: Ensuring that any applications developed specifically for the Web server are coded following secure programming practices
  • Step 5: Maintaining the secure configuration through application of appropriate patches and upgrades, security testing, monitoring of logs, and backups of data and OS
  • Step 6: Using, publicizing, and protecting information and data in a careful and systematic manner
  • Step 7: Employing secure administration and maintenance processes (including server/application updating and log reviews)
  • Step 8: Conducting initial and periodic vulnerability scans of each public Web server and supporting network infrastructure (e.g., firewalls, routers); see the port-check sketch below.

These practices are designed to help mitigate the risks associated with public Web servers.
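As a small illustration of Step 8, the following sketch (Python standard library only; the target address and port list are placeholders) reports which TCP ports on a host accept connections. It is only a quick check for unexpectedly exposed services, not a substitute for a dedicated vulnerability scanner.

```python
import socket

# Hypothetical target and port list; in practice the scan covers each public
# Web server and its supporting infrastructure (firewalls, routers).
TARGET = "192.0.2.10"
PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 3306, 8080]

def open_tcp_ports(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print("Open TCP ports:", open_tcp_ports(TARGET, PORTS))
```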

When addressing Web server security issues, it is an excellent idea to keep in mind the following general information security principles:

  • Simplicity - Security mechanisms (and information systems in general) should be as simple as possible. Complexity is at the root of many security issues.
  • Fail-Safe - If a failure occurs, the system should fail in a secure manner, i.e., security controls and settings remain in effect and are enforced. It is usually better to lose functionality rather than security.
  • Complete Mediation - Rather than providing direct access to information, mediators that enforce access policy should be employed. Common examples of mediators include file system permissions, proxies, firewalls, and mail gateways.
  • Open Design - System security should not depend on the secrecy of the implementation or its components. “Security through obscurity” is not reliable.
  • Separation of Privilege - Functions, to the degree possible, should be separate and provide as much granularity as possible. The concept can apply to both systems and operators and users. In the case of systems, functions such as read, edit, write, and execute should be separate. In the case of system operators and users, roles should be as separate as possible. For example, if resources allow, the role of system administrator should be separate from that of the security administrator.
  • Least Privilege - This principle dictates that each task, process, or user is granted the minimum rights required to perform its job. By applying this principle consistently, if a task, process, or user is compromised, the scope of damage is constrained to the limited resources available to the compromised entity (a minimal privilege-dropping sketch appears after this list).
  • Psychological Acceptability - Users should understand the necessity of security. This can be provided through training and education. In addition, the security mechanisms in place should present users with sensible options that give them the usability they require on a daily basis. If users find the security mechanisms too cumbersome, they may devise ways to work around or compromise them. The objective is not to weaken security so it is understandable and acceptable, but to train and educate users and to design security mechanisms and policies that are usable and effective.
  • Least Common Mechanism - When providing a feature for the system, it is best to have a single process or service gain some function without granting that same function to other parts of the system. The ability for the Web server process to access a back-end database, for instance, should not also enable other applications on the system to access the back-end database.
  • Defense-in-Depth - Organizations should understand that a single security mechanism is generally insufficient. Security mechanisms (defenses) need to be layered so that compromise of a single security mechanism is insufficient to compromise a host or network. No “silver bullet” exists for information system security.
  • Work Factor - Organizations should understand what it would take to break the system or network’s security features. The amount of work necessary for an attacker to break the system or network should exceed the value that the attacker would gain from a successful compromise.
  • Compromise Recording - Records and logs should be maintained so that if a compromise does occur, evidence of the attack is available to the organization. This information can assist in securing the network and host after the compromise and aid in identifying the methods and exploits used by the attacker. This information can be used to better secure the host or network in the future. In addition, these records and logs can assist organizations in identifying and prosecuting attackers.
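Returning to the least privilege principle above, the following POSIX-only Python sketch shows one common pattern: bind a privileged port while still running as root, then drop to an unprivileged account before serving any traffic. The account name "www-data" is an assumption; substitute whatever low-privilege user exists on the host.

```python
import os
import pwd
import socket

def bind_and_drop_privileges(port: int = 80, user: str = "www-data") -> socket.socket:
    """Bind a privileged port as root, then give up root rights."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", port))   # requires root (or CAP_NET_BIND_SERVICE)
    listener.listen()

    unprivileged = pwd.getpwnam(user)  # "www-data" is an assumed account name
    os.setgroups([])                   # give up supplementary groups
    os.setgid(unprivileged.pw_gid)     # drop the group first, then the user
    os.setuid(unprivileged.pw_uid)     # after this call, root rights are gone
    return listener
```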

Planning and Managing Web Servers

The most critical aspect of deploying a secure Web server is careful planning prior to installation, configuration, and deployment. Careful planning will ensure that the Web server is as secure as possible and in compliance with all relevant organizational policies. Many Web server security and performance problems can be traced to a lack of planning or management controls. The importance of management controls cannot be overstated. In many organizations, the IT support structure is highly fragmented. This fragmentation leads to inconsistencies, and these inconsistencies can lead to security vulnerabilities and other issues.

Web Server Installation and Deployment Planning

Security should be considered from the initial planning stage at the beginning of the systems development life cycle to maximize security and minimize costs.

It is much more difficult and expensive to address security after deployment and implementation. Organizations are more likely to make decisions about configuring hosts appropriately and consistently if they begin by developing and using a detailed, well-designed deployment plan. Developing such a plan enables organizations to make informed trade-off decisions among usability, performance, and risk.

A deployment plan allows organizations to maintain secure configurations and aids in identifying security vulnerabilities, which often manifest themselves as deviations from the plan.

In the planning stages of a Web server, the following items should be considered:

  • Identify the purpose(s) of the Web server.
    • What information categories will be stored on the Web server?
    • What information categories will be processed on or transmitted through the Web server?
    • What are the security requirements for this information?
    • Will any information be retrieved from or stored on another host (e.g., back-end database, mail server)?
    • What are the security requirements for any other hosts involved (e.g., back-end database, directory server, mail server, proxy server)?
    • What other service(s) will be provided by the Web server (in general, dedicating the host to being only a Web server is the most secure option)?
    • What are the security requirements for these additional services?
    • What are the requirements for continuity of services provided by Web servers, such as those specified in continuity of operations plans and disaster recovery plans?
    • Where on the network will the Web server be located?
  • Identify the network services that will be provided on the Web server, such as those supplied through the following protocols:
    • HTTP
    • HTTPS
    • Internet Caching Protocol (ICP)
    • Hyper Text Caching Protocol (HTCP)
    • Web Cache Coordination Protocol (WCCP)
    • SOCKS (Sockets)
    • Database services (e.g., Open Database Connectivity [ODBC]).
  • Identify any network service software, both client and server, to be installed on the Web server and any other support servers.
  • Identify the users or categories of users of the Web server and any support hosts.
  • Determine the privileges that each category of user will have on the Web server and support hosts.
  • Determine how the Web server will be managed (e.g., locally, remotely from the internal network, remotely from external networks).
  • Decide if and how users will be authenticated and how authentication data will be protected (a minimal password-hashing sketch appears after this list).
  • Determine how appropriate access to information resources will be enforced.
  • Determine which Web server applications meet the organization’s requirements. Consider servers that may offer greater security, albeit with less functionality in some instances. Some issues to consider include the following:
    • Cost
    • Compatibility with existing infrastructure
    • Knowledge of existing employees
    • Existing manufacturer relationship
    • Past vulnerability history
    • Functionality.
  • Work closely with manufacturer(s) in the planning stage.
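For the authentication item in the list above, the following sketch shows one common way to protect stored authentication data: passwords are never kept in the clear; only a salted, slow hash is stored and later compared in constant time. It uses only the Python standard library, and the iteration count and salt size are illustrative values rather than a policy recommendation.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor, not a policy recommendation

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a random salt and the salted PBKDF2 hash of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```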

The choice of Web server application may determine the choice of OS. However, to the degree possible, Web server administrators should choose an OS that provides the following:

  • Ability to restrict administrative or root level activities to authorized users only
  • Ability to control access to data on the server
  • Ability to disable unnecessary network services that may be built into the OS or server software
  • Ability to control access to various forms of executable programs, such as Common Gateway Interface (CGI) scripts and server plug-ins in the case of Web servers
  • Ability to log appropriate server activities to detect intrusions and attempted intrusions
  • Provision of a host-based firewall capability.

In addition, organizations should consider the availability of trained, experienced staff to administer the server and server products. Many organizations have learned the difficult lesson that a capable and experienced administrator for one type of operating environment is not automatically as effective for another.

Although many Web servers do not host sensitive information, most Web servers should be considered sensitive because of the damage to the organization’s reputation that could occur if the servers’ integrity is compromised. In such cases, it is critical that the Web servers are located in areas that provide secure physical environments. When planning the location of a Web server, the following issues should be considered:

  • Are the appropriate physical security protection mechanisms in place? Examples include the following:
    • Locks
    • Card reader access
    • Security guards
    • Physical IDSs (e.g., motion sensors, cameras).
  • Are there appropriate environmental controls so that the necessary humidity and temperature are maintained?
  • Is there a backup power source? For how long will it provide power?
  • If high availability is required, are there redundant Internet connections from at least two different Internet service providers (ISP)?
  • If the location is subject to known natural disasters, is it hardened against those disasters and/or is there a contingency site outside the potential disaster area?

Web Server Security Management Staff

Because Web server security is tightly intertwined with the organization’s general information system security posture, a number of IT and system security staff may be interested in Web server planning, implementation, and administration. This section provides a list of generic roles and identifies their responsibilities as they relate to Web server security. These roles are for the purpose of discussion and may vary by organization.

Web Server and Network Administrators

Web server administrators are system architects responsible for the overall design, implementation, and maintenance of a Web server.

Network administrators are responsible for the overall design, implementation, and maintenance of a network. On a daily basis, Web server and network administrators contend with the security requirements of the specific system(s) for which they are responsible.

Security issues and solutions can originate from either outside (e.g., security patches and fixes from the manufacturer or computer security incident response teams) or within the organization (e.g., the security office).

The administrators are responsible for the following activities associated with Web servers:

  • Installing and configuring systems in compliance with the organizational security policies and standard system and network configurations
  • Maintaining systems in a secure manner, including frequent backups and timely application of patches
  • Monitoring system integrity, protection levels, and security-related events
  • Following up on detected security anomalies associated with their information system resources
  • Conducting security tests as required.

Web Application Developers

Web application developers are responsible for the look, functionality, performance, and security of the Web content and Web-based applications they create. Threats are increasingly directed at applications instead of the underlying Web server software and OSs. Unless Web application developers ensure that their code takes security into consideration, the Web server’s security will be weak no matter how well the server itself and the supporting infrastructure are secured. Web application developers should ensure that each application they implement has the following characteristics:

  • Supports a secure authentication, authorization, and access control mechanism as required.
  • Performs input validation so that the application’s security mechanisms cannot be bypassed when a malicious user tampers with data he or she sends to the application, including HTTP requests, headers, query strings, cookies, form fields, and hidden fields.
  • Processes errors in a secure manner so as not to lead to exposure of sensitive implementation information.
  • Protects sensitive information processed and/or stored by the application. Inadequate protection can allow data tampering and access to confidential information such as usernames, passwords, and credit card numbers.
  • Maintains its own application-specific logs (see the logging sketch after this list). In many instances, Web server logging is not sufficient to track what a user does at the application level, requiring the application to maintain its own logs. Insufficient logging details can lead to a lack of knowledge about possible intrusions and an inability to verify a user’s actions (both legitimate and malicious).
  • Is “hardened” against application-level DoS attacks. Although DoS attacks are most frequently targeted at the network and transport layers, the application itself can be a target. If a malicious user can monopolize a required application or system resource, legitimate users can be prevented from using the system.
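To illustrate the application-specific logging item above, the following sketch (Python standard library logging; the file name and event fields are hypothetical) records who performed which action on which resource, including denied attempts, so that later review can reconstruct a user's activity at the application level.

```python
import logging

# Application-level audit log, kept separate from the Web server's access log.
# The file name and event fields below are illustrative choices.
app_log = logging.getLogger("webapp.audit")
handler = logging.FileHandler("webapp-audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
app_log.addHandler(handler)
app_log.setLevel(logging.INFO)

def record_action(user: str, action: str, resource: str, allowed: bool) -> None:
    """Record who did what to which resource, and whether it was permitted."""
    app_log.info("user=%s action=%s resource=%s allowed=%s",
                 user, action, resource, allowed)

# Log both legitimate and denied actions so later review can verify activity.
record_action("alice", "update", "/orders/42", True)
record_action("mallory", "delete", "/orders/42", False)
```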

