Why Firewall Security is Necessary to Protect your Network
In your car, the firewall sits between the engine compartment and the front seat, built to keep you from being burned by the heat of combustion. Your computer has a firewall too, and for much the same reason – to keep you and your data from being burned by the hackers and thieves who fuel the Internet's own brand of combustion and destruction.
The firewall, a "combo" approach of software, hardware, and communications protocols, is there to inspect network traffic – all the "packets" of information that pass through to your inner sanctum, your CPU and hard drives. A firewall will rule out, or at least greatly minimize, the possibility of harm by flagging and quarantining potentially harmful "zones," and will either permit or deny access to your computer based on the set of rules in force at the time, which in turn depends on many (very many) factors.
Basic tasks and settings
The basic task of a firewall is to regulate the flow of traffic between computer networks that have different "trust levels." The Internet is full of countless overlapping zones, some safe and some decidedly not. Internal networks, by contrast, are more likely to contain zones that merit a bit more trust. Zones that fall in between, or are hard to categorize, are sometimes referred to as "perimeter networks" or, in a bit of geek humor, Demilitarized Zones (DMZs).
Without proper configuration, a firewall can simply become another worthless tool. Standard security practices call for a "default-deny" firewall rule, meaning that the only network connections that are allowed are the ones that have been explicitly okayed, after due investigation. Unfortunately, such a setup requires detailed understanding of network applications and a great deal of time and energy to establish and administer.
Who can do what?
Many businesses and individuals lack sufficient computer and network knowledge to set up a default-deny firewall, and will therefore use a riskier but simpler "default-allow" rule, in which all traffic is permitted unless it has been specifically blocked for one of a number of possible reasons. This way of setting up a firewall makes "mysterious," unplanned network connections possible, and the chance that your system will be compromised rises considerably.
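The difference between the two policies above can be sketched in a few lines of code. This is a toy illustration, not a real firewall – the rule structure, port numbers, and function names are all hypothetical – but it shows why the default matters: the same rule set gives opposite answers for unlisted traffic.

```python
# Toy sketch of default-deny vs. default-allow filtering (illustrative only;
# real firewalls match on addresses, interfaces, connection state, and more).
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str  # "allow" or "deny"
    port: int    # destination port this rule matches

def filter_packet(dst_port: int, rules: list[Rule], default: str) -> str:
    """First matching rule wins; otherwise fall back to the default policy."""
    for rule in rules:
        if rule.port == dst_port:
            return rule.action
    return default

rules = [Rule("allow", 443), Rule("deny", 23)]

# Default-deny: only explicitly allowed traffic gets through.
print(filter_packet(443, rules, default="deny"))   # allow (explicit rule)
print(filter_packet(8080, rules, default="deny"))  # deny (no rule -> blocked)

# Default-allow: anything not explicitly blocked gets through.
print(filter_packet(8080, rules, default="allow"))  # allow (no rule -> permitted)
```

The unlisted port 8080 is the "mysterious connection" case: under default-deny it is silently blocked, under default-allow it sails through.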
Firewall technology had its first growth period during the computer revolution of the late 1980s, when the Internet was still fairly new in terms of its global reach and connectivity options. The predecessors of today's hardware/software hybrid firewalls were the routers used in the mid 1980s to physically separate networks from each other. However small the Internet's beginnings, its supremely fast growth and a lack of security planning led to inevitable breaches that the earliest ("prehistoric") firewall formats could not stop. Fortunately, computer pros learn from their errors, and firewall technology continues improving daily.
Fundamentals of computer networks, cloud computing, wireless networking, peer to peer networks, proxies
Tuesday, December 28, 2010
The Fundamentals of Fiber Optic Cable Management
Fiber optic cables are widely used in today's telecommunication networks because of their high bandwidth, high reliability, and relatively low cost. To maximize network performance, a good fiber cable management system must be in place.
There are four fundamental principles for a good fiber cable management system:
1. Bend radius reduction
Bending a fiber beyond its specified minimum bend radius can cause signal loss or even break the fiber, causing service disruption.
Today, industry standards for traditional singlemode jumpers typically specify a minimum bend radius of ten times the outside diameter of the jacketed cable or 1.5" (38 mm), whichever is greater. A newer breed of flexible singlemode optical fiber has the potential to significantly reduce these minimum bend radius requirements, to values as low as 0.6" (15 mm) depending on the cable configuration, without increasing attenuation.
A reduced-bend-radius fiber can withstand tighter bends within frames, panels, and pathways. It also enhances network reliability and reduces downtime.
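The traditional rule of thumb above – ten times the jacketed outside diameter or 38 mm, whichever is greater – is easy to compute. A small sketch (the function name and sample diameters are made up for illustration):

```python
def min_bend_radius_mm(jacket_od_mm: float, floor_mm: float = 38.0) -> float:
    """Traditional singlemode jumper rule: ten times the jacketed cable's
    outside diameter, or 38 mm (1.5 inches), whichever is greater."""
    return max(10.0 * jacket_od_mm, floor_mm)

# A thin 2 mm jumper: 10 x 2 mm = 20 mm, so the 38 mm floor governs.
print(min_bend_radius_mm(2.0))  # 38.0
# A thicker 5 mm cable: 10 x 5 mm = 50 mm exceeds the floor.
print(min_bend_radius_mm(5.0))  # 50.0
```

Note how the 38 mm floor dominates for small-diameter jumpers – which is exactly why the reduced-bend-radius fibers mentioned above (down to 15 mm) represent such a large improvement.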
2. Well defined cable routing paths
The most common cause of minimum bend radius violations is improper routing of fibers by installation technicians.
Routing paths should be clearly pre-defined and easy to follow. Ideally, these paths should be designed so that the technician has no option but to route the cables properly. When technicians are left to make their own choices, inconsistent decisions lead to improper routing and bend radius violations. Well defined routing paths also standardize the installation process, so less training time is required for fiber technicians.
3. Easy access to installed optical fibers
Easy access to installed fiber cable is essential for maintaining proper bend radius protection. The system should be designed so that individual fibers can be installed or removed without disturbing nearby fibers.
4. Physical protection of installed optical fibers
The management system must physically protect fiber cables from accidental damage by technicians and equipment. Otherwise, network reliability and performance will suffer.
Internet Access Types
DSL
Digital Subscriber Line (DSL) is a technology for bringing high speed Internet connections to home and corporate users. DSL doesn't require new wiring because it runs over regular telephone lines, and with DSL you can use your Internet connection and make phone calls at the same time.
ADSL
ADSL (Asymmetric Digital Subscriber Line) is a high speed Internet connection that sends and receives data over conventional telephone lines. ADSL supports data rates of 1.5 Mbps to 9 Mbps when receiving data (downstream) and 16 to 640 kbps when sending data over the Internet (upstream).
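The practical effect of those asymmetric rates is easy to see with a little arithmetic. A quick sketch, using a hypothetical 100 MB file and link rates chosen from within the ranges above (the exact speeds vary by line and provider):

```python
def transfer_time_s(size_megabytes: float, rate_mbps: float) -> float:
    """Ideal transfer time in seconds: megabytes (8 bits per byte)
    divided by the link rate in megabits per second.
    Ignores protocol overhead, so real transfers take longer."""
    return size_megabytes * 8 / rate_mbps

# Hypothetical 8 Mbps downstream / 0.5 Mbps upstream ADSL line:
print(transfer_time_s(100, 8.0))   # download 100 MB: 100 seconds
print(transfer_time_s(100, 0.5))   # upload the same file: 1600 seconds
```

The same file takes sixteen times longer to send than to receive – the "asymmetric" in ADSL, and the reason it suits typical home use, where downloads dominate.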
Cable Net
Cable modems use the coaxial copper wiring installed for cable TV to provide high speed access to the Internet; coaxial cable offers much greater bandwidth than regular telephone lines, enabling broadband access. A cable modem is a network bridge that conforms to IEEE 802.1D Ethernet bridging with some modifications, and some cable modem devices include a router to give the local area network its own IP addressing. Major cable modem manufacturers include Cisco, D-Link, Linksys, Motorola, Ericsson, Nortel Networks, and 3Com.
Dial Up
Dial up is a type of Internet access that works over regular telephone lines. The computer gains Internet access by connecting the telephone line to the computer's modem and configuring the computer with the user name, password, and dial up number provided by the local ISP. Dial up service is the least expensive option but also provides the lowest speeds. A dial up connection can be used with either of two types of modem: internal or external.
GPRS
GPRS (General Packet Radio Service) is a set of capabilities that allow mobile data streaming and transfer for users of the Global System for Mobile Communications (GSM); it is also called 2.5G. GPRS allows multiple users to share a communication channel, and supports web browsing, SMS, multimedia messaging, real time email reception, and more.
WiMAX
WiMAX stands for Worldwide Interoperability for Microwave Access. WiMAX provides very high speed broadband Internet connections to home, corporate, and roaming users over a wireless connection, and allows data, voice, and video communication at the same time. A WiMAX connection can also be bridged or routed to a wired or wireless LAN. WiMAX provides data rates of up to 70 Mbps.
Satellite Internet access
Satellite Internet services are used in locations where terrestrial Internet access is not available. Satellite broadband is delivered through a dish subscriber service and provides data rates comparable to other broadband technologies. A two way satellite Internet installation consists of a roughly two foot by three foot dish, two modems (uplink and downlink), and coaxial cable between dish and modem.
Virtual Private Network (VPN) Technology
The proliferation of network users and the accessibility, flexibility, and cost effectiveness of Wide Area Network (WAN)/Internet connections have increased the need for affordable and secure communications. Virtual Private Network (VPN) technology has become a preferred choice due to the security it provides during data transmission.
VPNs are essentially extended private networks built from links across a shared public telecommunication infrastructure such as the Internet. In a VPN, data is transmitted between two computers over the public network in a way that emulates a point-to-point link: packets are encrypted at the sending end and decrypted at the receiving end. Because IP packets sent over a VPN are encrypted and authenticated, the data, even if intercepted, is practically impossible for hackers to decipher without the encryption keys. VPN technologies maintain security and privacy through tunneling protocols and security procedures. A VPN can take various forms by combining different hardware and software technologies; a VPN LAN, for example, is a connection between a remote local area network (LAN) and a private network. VPN systems work across multiple environments and related technologies to deliver secure solutions.
A tunneling protocol, also called an encapsulation protocol, is a network technology for establishing and maintaining a logical network connection. The most widely used VPN tunneling protocols are Layer Two Tunneling Protocol (L2TP), IP Security (IPSec), Point-to-Point Tunneling Protocol (PPTP), Secure Sockets Layer/Transport Layer Security (SSL/TLS), OpenVPN, Layer 2 Tunneling Protocol version 3 (L2TPv3), VPN Quarantine, and Multi Path Virtual Private Network (MPVPN). VPN technology supports two types of tunneling: voluntary tunneling, where the VPN connection is set up and managed by the VPN client, and compulsory tunneling, where it is managed by the network provider. In tunneling, data packets are encapsulated within IP packets and then transmitted across the Internet; on arrival at the receiving network, the encapsulated packet is stripped out of the IP packet to recover the original message.
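The encapsulate-transmit-strip cycle described above can be sketched in miniature. This is purely a teaching toy, not any real protocol: the dictionary "headers," the addresses, and the XOR "cipher" are all stand-ins (real VPNs use strong ciphers such as those in IPSec ESP).

```python
# Toy model of VPN-style tunneling: encrypt the inner packet, wrap it in an
# outer header for transit across the public network, then unwrap and decrypt.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR stands in for real encryption, purely for demonstration.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str, key: bytes) -> dict:
    """Sending end: encrypt the inner packet and wrap it in an outer 'IP' header."""
    return {"src": outer_src, "dst": outer_dst,
            "payload": xor_crypt(inner_packet, key)}

def decapsulate(outer_packet: dict, key: bytes) -> bytes:
    """Receiving end: strip the outer header and decrypt to recover the original."""
    return xor_crypt(outer_packet["payload"], key)

key = b"shared-secret"
original = b"private LAN packet: 10.0.0.5 -> 10.0.0.9"
tunneled = encapsulate(original, "203.0.113.1", "198.51.100.7", key)

assert tunneled["payload"] != original          # unreadable in transit
assert decapsulate(tunneled, key) == original   # recovered at the far end
```

Only the outer addresses (the tunnel endpoints) are visible on the public network; the private addresses and data ride inside the encrypted payload.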
Trusted VPNs and secure VPNs are the two major VPN approaches. While secure VPNs rely on cryptographic tunneling, trusted VPNs depend solely on a single provider's network to protect data in transit. Trusted VPNs typically use Multi-Protocol Label Switching (MPLS), a technology frequently used to overlay VPNs with Quality of Service (QoS) across a trusted delivery network, and Layer 2 Tunneling Protocol, which merges the characteristics of two earlier proprietary VPN protocols.
Secure VPNs use cryptographic tunneling protocols to provide privacy through encryption, authentication, and message integrity. These mechanisms can block snooping via packet sniffing, identity spoofing, and message tampering. By implementing and operating the right secure VPN protocols, it is possible to provide secure communications over insecure networks and considerably improve VPN performance.
The most popular VPN tunneling protocols are Internet Protocol Security (IPSec), Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Tunneling Protocol (L2TP). IPSec is a widely used, standardized VPN protocol that is often preferred for its interoperability. It is an open standards framework consisting of a suite of secure protocols that can run over an existing IP connection. Operating at layer 3 of the OSI model, IPSec provides data authentication and encryption, can be implemented on any device communicating over IP, and protects all data traffic carried over IP. It can also provide encryption and authentication for non-IP traffic by operating alongside Layer 2 tunneling protocols. The three major components of IPSec are the Authentication Header (AH), Encapsulating Security Payload (ESP), and Internet Key Exchange (IKE). The authentication header, added after the IP header, provides packet-level authentication and ensures that data packets are not tampered with along the route; ESP provides confidentiality and authentication of data origin.
Point-to-Point Tunneling Protocol (PPTP) is a Microsoft proprietary development used in VPN networking and communications. It authenticates users with authentication protocols such as MS-CHAP, CHAP, SPAP, and PAP. Although PPTP is easy to use, it is not a very flexible solution and is not as interoperable as other VPN protocols. PPTP communication has three parts: the PPTP connection (the client establishes a PPP link to an ISP), the PPTP control connection (the user creates a PPTP connection to the server), and the PPTP data tunnel (client and server exchange communication within an encrypted tunnel). PPTP is generally employed to secure communication channels between Windows hosts on an internal network.
The Layer 2 Tunneling Protocol (L2TP) tunnels the Point-to-Point Protocol (PPP) across a public IP network. Because it operates at layer 2, it can carry non-IP protocols through the VPN tunnel and works over Layer 2 transports such as ATM and frame relay. L2TP provides encryption only in conjunction with other protocols or encryption mechanisms.
Technological advancements have led businesses to look for ways to better secure their networks and communications, and an influx of VPN products now occupies the marketplace. Customers compare VPN products on functionality and flexibility to get the best of the technology; such comparisons open up a wide array of choices, and the customer's own requirements determine which product is the appropriate match.
Cloud Computing and Security Issues
Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Cloud computing is a method of delivering hosted services – Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) – over the Internet in a fast, cost-effective way. The technology has gained popularity in a weakened economy as enterprises seek ways to save money, but as always, this emerging technology presents certain risks, and it could open an organization to security vulnerabilities and threats.
The concept generally incorporates combinations of infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the provider's servers. Cloud computing is sometimes confused with grid computing, a form of distributed computing in which a "super and virtual computer" is composed of a cluster of networked, loosely-coupled computers acting in concert to perform very large tasks, and with utility computing, the packaging of computing resources such as computation and storage as a metered service similar to a traditional public utility such as electricity.
Characteristics
Cloud computing customers do not generally own the physical infrastructure serving as host to the software platform in question. Instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, while others bill on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits. Additionally, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sites.
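The utility-versus-subscription billing contrast above comes down to simple arithmetic. A quick sketch with hypothetical prices (the rates and function names are invented for illustration; real provider pricing is far more granular):

```python
def utility_bill(hours_used: float, rate_per_hour: float) -> float:
    """Metered (utility) model: pay only for the hours actually consumed."""
    return hours_used * rate_per_hour

def subscription_bill(months: int, monthly_fee: float) -> float:
    """Subscription model: flat fee regardless of usage."""
    return months * monthly_fee

# Hypothetical rates: $0.25/hour metered vs. $120/month flat.
# A lightly used server (50 hours in a month) is cheaper metered...
print(utility_bill(50, 0.25))        # 12.5
print(subscription_bill(1, 120.0))   # 120.0
# ...but a server running around the clock (720 hours) favors the flat fee.
print(utility_bill(720, 0.25))       # 180.0
```

This is also why the text notes that customers "do not have to engineer for peak load limits": under metered billing, a brief burst of extra capacity costs only the hours it runs, rather than requiring permanently owned hardware sized for the peak.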
Companies
VMware, Sun Microsystems, IBM, Amazon, Google, Microsoft, and Yahoo are some of the major cloud computing service providers. Cloud services are being adopted by everyone from individual users to large enterprises, including General Electric and Procter & Gamble.
Cloud computing types
Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.
Hybrid cloud
A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises".
Private cloud
Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticised on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
Cloud Computing and Security Issues
The benefits of virtualization and cloud computing are transforming the way we look at IT outsourcing for development, testing, and production. Existing skills, processes, and projects seem to translate naturally to a virtualized environment, and few obstacles seem to impede adoption of the cloud model for production. Practitioners and the media alike have highlighted the potential security issues of virtualization, and the cloud brings with it a layer of additional security considerations, in terms of both technology and process.
This additional security layer isn't necessarily scary or complicated, but right now trust in the security of cloud computing is the number one impediment to its growth. This article looks at the cloud from various points of view, comparing real-world examples to examine the security implications of the cloud and showing how they integrate with traditional security processes.
Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Cloud computing is a method of delivering hosted services -- Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) – over the Internet in a fast, cost-effective way. The technology has gained popularity in a weakened economy as enterprises seek ways to save money, but as always, this emerging technology presents certain risks, and it could open an organization to security vulnerabilities and threats.
The concept generally incorporates combinations of infrastructure as a service (IaaS) , platform as a service (PaaS), software as a service (SaaS) . Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers. Cloud computing can be confused with Grid computing which is a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks and Utility computing – the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity
Characteristics
Cloud computing customers do not generally own the physical infrastructure serving as host to the software platform in question. Instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, while others bill on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits. Additionally, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sites.
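The utility-billing model described above is easy to illustrate with a back-of-the-envelope comparison. The rates below are entirely hypothetical, chosen only to show the arithmetic of paying per instance-hour versus paying a fixed cost for peak-provisioned hardware:

```python
# Compare a hypothetical pay-per-use cloud bill against a fixed server cost.
# All prices are illustrative, not real provider rates.

HOURLY_RATE = 0.10       # $ per instance-hour (hypothetical)
FIXED_SERVER_COST = 250  # $ per month for an owned server (hypothetical)

def monthly_cloud_cost(hours_used: float, instances: int = 1) -> float:
    """Utility-style billing: pay only for the instance-hours consumed."""
    return hours_used * instances * HOURLY_RATE

# A workload that runs 8 hours a day, 22 business days a month:
part_time = monthly_cloud_cost(hours_used=8 * 22)

# The same instance left running around the clock (peak-provisioned):
always_on = monthly_cloud_cost(hours_used=24 * 30)

print(f"part-time workload: ${part_time:.2f}/month")   # $17.60/month
print(f"always-on workload: ${always_on:.2f}/month")   # $72.00/month
print(f"fixed server:       ${FIXED_SERVER_COST:.2f}/month")
```

The point is not the specific numbers but the shape of the model: an idle instance that is switched off costs nothing, which is why customers no longer need to engineer for peak load.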
Companies
VMware, Sun Microsystems, IBM, Amazon, Google, Microsoft, and Yahoo are some of the major cloud computing service providers. Cloud services are being adopted by everyone from individual users to large enterprises, including General Electric and Procter & Gamble.
Cloud computing types
Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.
Hybrid cloud
A hybrid cloud environment, consisting of multiple internal and/or external providers, is expected to be "typical for most enterprises".
Private cloud
Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticised on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
Cloud Computing and Security Issues
Setting Up a Small Business Computer Network
A computer network setup is an integral part of every organization today. Linking the different computers in an office helps the business grow by saving both time and money. Each office has its own specific needs, and network consulting firms can design a computer network setup that caters specifically to them. With a good computer network in place, your computers are used to their full potential and all your needs are met.
An effective computer network begins with a good network design. A network consulting firm can suggest the best possible design based on the nature of the enterprise's work and the role the network will play in it. There are various kinds of computer networks, from the LAN (local area network) to the WAN (wide area network), and each has its own particular role; the right one can be chosen depending on the field of operation. A network can even be wireless, which is a more practical solution in recent times as people can stay connected while roaming freely with their laptops. A remote access arrangement is also a good networking option. For a traditional wired network, be certain that the cabling is of superior quality, or the problems you end up with will be plenty. A system integration firm is the best option for setting up a good network in the office.
A system integration firm will help integrate a new setup with the one already existing in the organization. This saves the company money, since the professionals can usually find a way of combining the new technology with the old rather than discarding the existing network. The different systems in the company can all be brought under one umbrella, so the complete picture can be seen at once: the integration process binds all the subsystems into a single whole.
A computer network set up with the aid of network consultants, drawing on their expertise, helps make the operation of the company smoother. An effective system integration process adds to the benefits of the network, and all of this together triggers the company's growth. There are various firms offering network design and system integration services; do a proper study before hiring any of them. Since this work demands thorough knowledge of all the systems in place, be certain the job is assigned to a competent firm with sufficient experience behind it.
File sharing over the WAN - Storage Networking
File sharing over local area networks (LANs) has become an integral part of enterprise computing. Over 20 years ago, personal computer users realized the benefits of abandoning "sneakernet" by connecting individual computers into a network. This allowed more efficient file sharing and project collaboration for members of a workgroup who were only a cable length away, typically in the same room or building. Now there is a desire for global organizations to share file data in real time among all their locations.
These global organizations now wish to implement wide area network (WAN) based storage consolidation, to reduce storage resource management costs while also enabling remote employees to collaborate on projects just as if they were all on a local network. Given the current state of technology, however, enterprises are forced to limit real-time file sharing to within each LAN, resulting in disconnected islands of file sharing throughout the enterprise.
Corporations have tried to fix this problem in various ways. For large files or large groups of files, companies may transfer the files overnight via the file transfer protocol (FTP). For ad hoc collaboration between employees, people might attach files to email messages. While these solutions are marginally acceptable, they all suffer from the same problem: the file sharing is not in real time, with everyone working off the same version of the data. Rather than sharing a single copy among many users, these methods duplicate files and propagate private copies to each user. The moment a file is emailed or FTP'ed, two or more potentially out-of-sync copies are created. The divergent versions then need to be manually reconciled, usually after every set of revisions by the people working on the project.
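The out-of-sync problem is easy to demonstrate: the moment one file is copied to two collaborators and each edits independently, the copies diverge and must be reconciled by hand. A minimal sketch of that divergence, with invented file contents and a checksum standing in for "same version or not":

```python
import hashlib

def checksum(data: bytes) -> str:
    """Fingerprint a file's contents so divergent copies can be detected."""
    return hashlib.sha256(data).hexdigest()

# One "master" file is emailed or FTP'ed to two collaborators...
master = b"Q3 budget draft v1\n"
copy_alice = master
copy_bob = master

# ...and each edits a private copy independently.
copy_alice += b"Alice: add travel line item\n"
copy_bob += b"Bob: revise headcount\n"

# The copies are now out of sync; someone must reconcile them manually.
assert checksum(copy_alice) != checksum(copy_bob)
print("copies diverged:", checksum(copy_alice)[:8], "vs", checksum(copy_bob)[:8])
```

Real-time file sharing over the WAN avoids this entirely by keeping a single authoritative copy instead of propagating private ones.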
What is Home Wireless Networking?
Home wireless networking is just what it sounds like -- a way of creating networks without any wires within your home! If this sounds exciting to you, then read on.
With a home wireless network, you can create radio connections between computers that let them communicate and connect to the Internet without you having to go to all the trouble of connecting them with wires. The computers don't even need to have a clear path for the signal, as the wireless signal can go through walls and between floors easily.
Where Did It Come From?
The story of wireless networking is a rather strange one. It is basically an application of a technology called frequency hopping which was, believe it or not, invented by the actress Hedy Lamarr and a musician named George Antheil, back in the 1940s. Seriously, do a web search -- I promise I'm not pulling your leg here.
They received a patent for their invention, which was intended to help in the war effort. Hedy was Jewish, but had been made to hide it and socialise with Hitler as a young woman -- she had to drug her husband and run away to London to escape her native Austria. The importance of what they'd done, however, wasn't recognised until many years later.
The U.S. military adopted the technique in the '60s, using it during the Cuban Missile Crisis. Hedy never saw any money from it as the patent had expired (don't worry, she was a film star!), but she was given a Pioneer Award by the Electronic Frontier Foundation in 1997, three years before her death.
Wireless Network at Home
When most people talk about wireless networks, they are talking about wireless LANs (local area networks). A local area network doesn't mean that it covers your whole neighbourhood -- the 'local area' in question can be only one building, such as your house. So if you want wireless networking in your home, you want a home wireless LAN.
Once people have wireless in their home, they always seem to act as if there's been an absolute miracle. After years of drilling holes in the walls and running wires all over the place, suddenly seeing them gone is really amazing.
Network Management & Server Management
1) What Is Network Management?
Network management means different things to different people. In some cases, it involves a solitary network consultant monitoring network activity with an outdated protocol analyzer. In other cases, network management involves a distributed database, auto polling of network devices, and high-end workstations generating real-time graphical views of network topology changes and traffic. In general, network management is a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks.
The ISO network management model's five functional areas are listed below.
Fault Management—Detect, isolate, notify, and correct faults encountered in the network.
Configuration Management—Manage configuration aspects of network devices, such as configuration file management, inventory management, and software management.
Performance Management—Monitor and measure various aspects of performance so that overall performance can be maintained at an acceptable level.
Security Management—Provide access to network devices and corporate resources only to authorized individuals.
Accounting Management—Track usage of network resources.
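Fault management, the first of the areas above, reduces at its simplest to "can we still reach the device?". The sketch below probes a placeholder device inventory with a plain TCP connection test using only the standard library; real tools would poll devices via SNMP instead, and the hosts and ports here are made up:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Fault detection at its simplest: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder inventory; a real system would auto-discover its devices.
devices = [("127.0.0.1", 9), ("127.0.0.1", 65000)]

for host, port in devices:
    status = "up" if is_reachable(host, port) else "DOWN - notify operator"
    print(f"{host}:{port} -> {status}")
```

A production fault manager adds the isolate/notify/correct steps on top of this detection loop: correlating failures, paging an operator, and opening a trouble ticket.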
Any number or combination of network management disciplines can be outsourced including:
helpdesk and break-fix services
fault management
configuration management
change management
asset management
performance management
service level management
Your computers and networks need regular care to perform at their optimal level. As your IT manager, Northwest Computer Support monitors your computers and network health on a daily basis. Having this information allows us to proactively maintain your network and provide strategic guidance before trouble occurs.
2) What Is Server Management?
Server management is the maintenance and operation of a server. While this can mean many things, the main idea behind server management is uptime. The whole purpose of a server is to be a reliable resource for users to interact with. Management of a server can vary depending on the size of the server and its purpose, and different types of servers require different management plans.
Types of Server Management
Server Monitoring
Asset Management
Case Management
License Management
Server Optimization
Backup Monitoring
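Since uptime is the central metric behind all of the disciplines above, a monitoring plan ultimately reduces to counting how many probes of the server succeeded. A minimal sketch of that arithmetic; the check log below is made up (one probe every 5 minutes over a day, two failures):

```python
def uptime_percent(checks: list[bool]) -> float:
    """Uptime as the percentage of monitoring probes that succeeded."""
    if not checks:
        return 0.0
    return 100.0 * sum(checks) / len(checks)

# Hypothetical log: 288 probes in a day (every 5 minutes), 2 failed.
day_of_checks = [True] * 286 + [False] * 2

print(f"uptime: {uptime_percent(day_of_checks):.2f}%")  # uptime: 99.31%
```

The same calculation, run over a month or a year of checks, is what service-level targets like "99.9% uptime" are measured against.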
What is a Peer To Peer Network?
There are lots of different uses for a p2p network, and mostly they're good uses. A lot of people, though, are looking for free mp3 downloads or trying to get pirated or copyrighted movies or music for free. Probably the most famous example of this was the Napster network. Whether it was the first p2p network is debatable, but it was definitely the most popular and widely known. Napster basically brought the concept of illegal mp3 sharing to the mainstream.
p2p networks have several characteristics which make them a great network configuration, but also make them attractive to people looking to use them for less than admirable reasons. One of those characteristics is that a p2p network is what you would call 'distributed.' A distributed network is one where your information, data, or hosts and nodes are not centrally located. They're not all in one place; they're spread all over, geographically dispersed. So what? What makes dispersed nodes in a network a good thing? There are several reasons. First, you can tell your node to look for your information at the closest available node with the data you want. That way, you ensure the fastest possible transfer rate and lowest latency. It also means a p2p network is resilient to outages: if one node goes down, you get your download somewhere else.
Now, to really complicate things, you can break large files apart into smaller pieces or fragments. This is beneficial because you can then simultaneously download multiple parts of the same file! Now you're really smoking with your download speeds. Whether you have a bunch of stuff to share or you're getting free mp3 downloads from the internet, a p2p network works really well.
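The fragment idea is simple to sketch: split a file into fixed-size chunks, fetch them in any order (possibly from different peers), and verify the reassembled whole. The chunk size and file contents below are arbitrary toy values; real p2p systems use chunks of hundreds of kilobytes:

```python
import hashlib

CHUNK_SIZE = 4  # bytes; purely illustrative

def split(data: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Break a file into fixed-size fragments that can be fetched in parallel."""
    return [data[i:i + size] for i in range(0, len(data), size)]

original = b"an example file shared on a p2p network"
fragments = split(original)

# Fragments could arrive from different peers in any order; keeping them
# indexed lets the client reassemble the file correctly.
reassembled = b"".join(fragments)

# A content hash verifies the reassembled file matches the original.
assert hashlib.sha256(reassembled).digest() == hashlib.sha256(original).digest()
print(f"{len(fragments)} fragments, reassembly verified")
```

Downloading several fragments at once from different peers is what produces the speed-up the paragraph above describes.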
There is one final characteristic of a p2p network that makes it good for illegal file sharing or downloading mp3s and movies: anonymity. In most p2p networks, data is not stored centrally, which means there's no 'big brother' looking over all the traffic between nodes. In some p2p networks, each client maintains a small database listing the available files and the nodes where they can be found, so each client connects directly to the clients that have the files it wants. Who's going to monitor that and say you're not allowed to do it? Or check that the files you're sharing aren't protected by copyright? It's very difficult to monitor and control p2p networks.
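That decentralized index can be pictured as nothing more than a small mapping, on each node, from file names to the peers that advertise them. Every file and peer name below is invented for illustration:

```python
# Each node keeps its own small index: which peers advertise which files.
# There is no central server holding this mapping, which is exactly why
# the network is hard to monitor. All names here are made up.
local_index: dict[str, list[str]] = {
    "song.mp3": ["peer-a", "peer-c"],
    "movie.avi": ["peer-b"],
}

def find_sources(filename: str) -> list[str]:
    """Return peers advertising the file; the client then connects directly."""
    return local_index.get(filename, [])

print(find_sources("song.mp3"))     # ['peer-a', 'peer-c']
print(find_sources("missing.txt"))  # []
```

Because the lookup and the transfer both happen peer-to-peer, no single point in the network ever sees the whole picture of who is sharing what.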