Tuesday, December 28, 2010

Why Firewall Security is Necessary to Protect your Network

In your car, the firewall sits between the engine compartment and the front seat and is built to keep you from being burned by the heat of the combustion process. Your computer has a firewall, too, for much the same reason – to keep you and your data from being burned by hackers and thieves who are the unfortunate creators of “Internet combustion” and destruction.

The firewall, a "combo" approach of software that regulates and monitors hardware and communications protocols, is there to inspect network traffic and all the "packets" of information that pass through to your inner sanctum, your CPU and hard drives. A firewall will rule out, or at least greatly minimize, the possibility of harm by noting and quarantining potentially harmful "zones," and will either deny or permit access to your computer based on the set of rules that applies at the time, which in turn depends on many (very many) factors.

Basic tasks and settings

The basic task for a firewall is to regulate the flow of traffic between computer networks that have different "trust levels." The Internet is full of countless overlapping zones, some safe and some totally deadly. On the other hand, internal networks are more likely to contain a zone or zones that offer a bit more trust. Zones that fall in between the two, or are hard to categorize, are sometimes referred to as "perimeter networks" or, in a bit of geek humor, Demilitarized Zones (DMZs).

Without proper configuration, a firewall can simply become another worthless tool. Standard security practices call for a "default-deny" firewall rule, meaning that the only network connections that are allowed are the ones that have been explicitly okayed, after due investigation. Unfortunately, such a setup requires detailed understanding of network applications and a great deal of time and energy to establish and administer.

Who can do what?

Many businesses and individuals lack sufficient computer and network knowledge to set up a default-deny firewall, and will therefore use a riskier but simpler "default-allow" rule, in which all traffic is permitted unless it has been specifically blocked for one of a number of possible reasons. This way of setting up a firewall makes "mysterious" and unplanned network connections possible, and the chance that your system will be compromised becomes much greater.
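The difference between the two policies can be sketched in a few lines. This is a toy illustration, not any real firewall's API; the rule list and field names are invented for the example:

```python
# A minimal sketch of "default-deny" packet filtering: a packet is
# permitted only if an explicit allow rule matches; everything else
# is dropped. The rules below are hypothetical examples.

ALLOW_RULES = [
    # (protocol, destination port) pairs that have been explicitly approved
    ("tcp", 80),    # HTTP
    ("tcp", 443),   # HTTPS
    ("udp", 53),    # DNS
]

def is_allowed(protocol: str, dst_port: int) -> bool:
    """Default-deny: permit a packet only if an explicit rule matches."""
    return (protocol, dst_port) in ALLOW_RULES

print(is_allowed("tcp", 443))   # explicitly approved, so permitted
print(is_allowed("tcp", 23))    # telnet: no rule, so denied by default
```

A default-allow firewall would simply invert the logic: maintain a list of blocked traffic and permit everything that doesn't match it, which is exactly why unexpected connections slip through.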

Firewall technology had its first growth period during the computer technology revolution of the late 1980s, when the Internet was fairly new in terms of its global reach and connectivity options. The predecessors to today's hardware/software hybrid firewalls were the routers used in the mid 1980s to physically separate networks from each other. However small the Internet's beginnings, its supremely fast growth and the lack of security planning led to inevitable breaches of those older ("prehistoric") firewall formats. Fortunately, computer pros learn from their errors, and firewall technology continues improving daily.

The Future of Dedicated Hosting Delivery

For all the hype, over the last few years an increasing number of businesses have started moving not just distribution but more important business processes online in earnest. The main reason this much anticipated migration has dragged its heels is that change takes time, and businesses going online are faced with hurdles of cost, complexity, resourcing, and marketing at every step of the process.

The infrastructure workhorse of this fundamental change is hosting.

As many businesses now know, hosting has a wide range of options in terms of cost and function, but it's the growth of Dedicated Hosting that has continued to gather momentum over recent years. The most interesting aspect of this growth is that indicators show that most businesses are at the bottom of the adoption curve and that the most aggressive growth is yet to come.

What customers want

What customers have wanted, and more importantly needed, over the past years has changed considerably. As businesses become leaner and headcounts shrink, priorities and their drivers have changed too. So-called "have-to-haves," the essential requirements, are the only issues getting any traction, relegating "nice-to-haves" to the back-burner until they either become irrelevant or are escalated for other reasons.

This phenomenon has seen companies spend less time, resources and money on their online presence than they might have.

Priorities have changed.

Issues that have re-prioritised the importance of, and investment in, online presence and tools now include better brand awareness through greater exposure, increased distribution driving higher sales and new markets, and better processes to increase efficiency and reduce costs.

As customers realise that their commitment to their online tools needs to increase, so too does their requirement for effective development. Once the development has been defined and is nearing completion, the tool requires a means of delivery, namely effective hosting. Hosting then divides into two categories: shared hosting (otherwise known as virtual hosting, as opposed to virtualised hosting) and dedicated hosting.

Dedicated hosting becomes a requirement once the environment the developer requires is either more complex or more customised than a vanilla shared hosting environment. In short, custom development requires the freedom that only a dedicated hosting environment can deliver.

How service providers are meeting customers' needs

Dedicated hosting has traditionally been delivered by carriers, Internet service providers or hosting providers. It has quickly become apparent that hosting, particularly dedicated hosting, is a specialisation requiring specific skills to deliver the required product offerings.

As dedicated hosting growth gathers momentum, so too does the need for fast, cost effective delivery. Until recently, delivering dedicated hosting has meant a long-winded and complex process for both service provider and customer alike, involving specifying and sourcing the right hardware, burn testing, server OS configuration, application configuration, IDC installation and connectivity configuration and finally a handover to the customer to, only then, start the process of final configuration for production rollout.

The process is long-winded, expensive and complex for all parties concerned. The issues continue for dedicated servers set up this way: when the time comes to upgrade disk, RAM or even the whole server, the process begins again from the start.

Virtualisation: Not just as good. Better.

New virtualization technology is now set to deliver dedicated hosting in a way that not only eliminates most of the complexity for both service provider and customer alike, but introduces many additional virtualised hosting benefits that have not previously existed.

For service providers, it allows scalable, profitable and fast delivery of premium dedicated hosting.

For customers, it eliminates hardware, hardware drivers and hardware upgrades. In addition, due to the features included in some server virtualisation technology, it delivers far higher levels of availability and allows clones of production environments to be created for seamless development and rollout.

Virtualisation and virtualisation

As either a service provider or a customer, it’s important to understand that many different flavours of server virtualisation exist, bringing different price points, levels of resource control and base-OS independence. Apart from resource control and allocation, stability of, and independence from, the underlying OS is essential to realising all the available benefits of server virtualisation technology and quality virtualised hosting.

Of all the current crop of server virtualisation technology, VMware Virtual Infrastructure 3 seems to lead the market against all of the above criteria, combining the highest available resource control with elimination of hardware drivers. Infrastructure 3 also allows intelligent high-availability redistribution of VMs from failed physical servers to the remaining healthy servers in the farm.

Server virtualisation technology is set to expand its market share as it has in the wider server market – it just depends on whether virtualised hosting service providers and customers alike realise the possibilities available for premium virtualised hosting.

All You Wanted to Know About Proxies

Though a common term in the computer age today, it needn't provoke doubt in users. We all know what a proxy is: something used instead of the real thing, especially in dubious deals where you send a proxy to carry out an intended act instead of the real person. Similarly, in the world of servers, proxy servers are fast gaining popularity. A closed proxy can be accessed only by a limited set of people and permits using someone else's computer to conceal your identity and/or location. An open proxy is a proxy server that can be accessed by anyone who uses the Internet.

By and large, a proxy server permits users in a particular network group to store and forward Internet services such as DNS lookups or web pages, so their bandwidth usage is lessened and controlled. In the case of an open proxy, however, every user on the Internet can avail of this forwarding service.
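From the client's side, routing traffic through a proxy takes only a few lines with Python's standard library. The proxy address below is a placeholder, not a real server:

```python
# Routing HTTP requests through a proxy with the standard library.
# The proxy address is a hypothetical placeholder for illustration.
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://203.0.113.10:8080",   # hypothetical proxy server
})
opener = urllib.request.build_opener(proxy)

# Every request made through this opener is forwarded by the proxy, so
# the destination site sees the proxy's IP address, not the client's.
# opener.open("http://example.com/")   # uncomment to actually send a request
```

This is exactly the forwarding relationship described above: the proxy sits between the user and the service, and the service only ever talks to the proxy.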

Open proxies, also known as anonymous open proxies, allow users to hide their true IP address from the services they access. In fact, proxies have at times been used deliberately to abuse or disrupt a service, including breaking its terms of service or the law. For such reasons open proxies have been viewed as problematic from time to time. They have come under severe scrutiny because schoolchildren and office staff alike have been known to use them to access restricted sites during work hours.

The use of proxies can work to your advantage by ensuring that no one can take advantage of you in this virtual world. This is especially true of users who visit social forums on a daily basis. Users may get onto such sites to gain knowledge, learn about the latest developments or increase their social networking base, but there are always those on the prowl for vulnerable, easy targets. If you are targeted, your life could become a mess: your credit card accounts could be tampered with, someone could learn all your details, you could be stalked or blackmailed, and worse still you could find yourself "visiting" sites and forums you never have. You could even be made the next porn star. For protecting your identity and enhancing online security, the use of proxies works just fine. It permits you to access Facebook, Hotmail, YouTube and others without a worry. These may be popular as fun sites, but they also serve educational needs, helping you learn along the way, or give you a secure social networking group you can depend on. Along the way you can make alliances that can help you get a job, make business contacts or simply make a friend. The opportunities are endless, as long as you keep yourself safe from prowlers.

A computer can run as an open proxy server without its owner knowing. This usually happens through misconfiguration of proxy software on the machine; malware, viruses, trojans and worms are also used to turn computers into open proxies. Open proxies can be slow, as slow as 14400 baud (14.4 kbit/s) or even below 300 baud, and at times they alternate between fast and slow from minute to minute. PlanetLab proxies are quicker and were deliberately put in place for public use.

Despite the popularity of proxies, the controversies surrounding them mean there are systems in place that don't permit their usage everywhere. IRC networks routinely test client systems for known open proxy types. Mail servers can be configured to routinely check mail senders for proxies with the help of proxycheck software. Mail servers can also consult various DNSBL servers that block spam; such servers also list open proxies and help in blocking them.
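The DNSBL check mentioned above follows a simple convention: the connecting IP's octets are reversed, prepended to the blocklist's DNS zone, and resolved; if an address record comes back, the IP is listed. A sketch, using a made-up zone name in place of any real blocklist operator's zone:

```python
# Sketch of a DNSBL lookup as a mail server might perform it.
# "dnsbl.example.org" is an illustrative placeholder zone, not a real list.
import socket

def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the query name: reversed octets + blocklist zone."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

def is_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    """An A record means the IP is listed; NXDOMAIN means it is not."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

print(dnsbl_query_name("203.0.113.10"))  # 10.113.0.203.dnsbl.example.org
```

Real deployments query the zone published by the blocklist operator and often interpret the returned address to distinguish listing reasons (open proxy, spam source, and so on).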

Nevertheless, anonymous open proxies have also been hailed because they enhance anonymity, and with it security, when browsing the web or engaging in other Internet services. The use of proxies conceals a user's true IP address. This matters because your IP address can be used against you: it can help miscreants glean information about you and accordingly hack into your computer. Open proxies are being looked upon as the next big thing for dodging Internet censorship efforts by governments and concerned organizations. Various web sites make available updated lists of open proxies on an ongoing basis.

A History of VoIP

The use of VoIP (Voice over IP) is increasing rapidly year on year. It is predicted that by the end of 2009 there will be 256 million users of VoIP around the world. The advantages of VoIP in terms of scale, cost and ease of use are now commonly agreed upon. But where did VoIP begin? Who invented VoIP?

The history of VoIP extends further back into the pre-Internet world than most people would think. The first VoIP calls were made as far back as 1973. The capability to send voice across a digital network was pioneered on the ARPANET, the precursor to the modern Internet. It carried data and voice only between the private network of computers on the ARPANET grid, but the seeds of the VoIP revolution were sown by these pioneers.

VoIP continued to develop amongst a small cadre of computer users who used the technology to communicate with each other in a sort of geeky version of CB radio. Any two computers connected on the same network could use VoIP technology, but there was no widespread adoption.

The first major step towards the VoIP services that many of us use today was the introduction of software called "Internet Phone" from a US-based company called Vocaltec. This first publicly available, off-the-shelf internet phone software was the catalyst for the explosion in VoIP use. The Vocaltec software ran on a home PC and utilized much the same hardware that VoIP services use today: soundcards, speakers and headsets. Internet Phone differed from most modern VoIP services in that it used the H.323 protocol instead of the SIP protocol that is more ubiquitous today.

Although Internet Phone was an immediate commercial success, it suffered from a variety of problems. The lack of high speed internet access meant that quality could be poor and the flow of voice slow; early VoIP calls were like using walkie-talkies in terms of signal quality. Another issue was that the two computers talking to each other needed to have the same soundcards with the same drivers for the software to work, which obviously limited the use of the software and the effectiveness of the process. Much of the transmission was done via modems over traditional telephone lines, providing a service of worse quality than a normal phone call.

Once Vocaltec had laid the foundations, the increase in the use of VoIP was fairly rapid, accounting for 1% of all US phone calls by 1998. Other companies began to develop software for the VoIP market, and also hardware in the form of hard phones and network switches. The expansion of broadband also aided the growth of VoIP by increasing call quality and reducing the latency issues that affected VoIP at the beginning. By the year 2000, VoIP calls in the US were about 3% of the total.

The popularity of VoIP has increased since the turn of the millennium, with free VoIP provider Skype having registered a staggering 400 million user accounts by the end of 2008. With the growing availability of VoIP services for mobile phones, it looks as if the adoption of VoIP will continue to expand rapidly.

How to Connect Two Computers to One Broadband Modem to Use the Internet

Sharing an internet connection between two computers in your home is very easy with all this new technology. If you own a broadband modem or one that is supplied from your internet service provider you can most probably have two computers connected to it without needing a router. This can save you money by not having to buy that extra network equipment.

Quick tips on how to connect two computers to one modem:

If you look at the back of your modem, it may have a USB (blue) and an Ethernet (yellow) port.

**My service provider, Telstra Australia, supplied my modem. It came with an installation disc, which we will need in order to connect to the modem through USB.
**If you have a similar modem to the one I have described, you can connect two computers to the one broadband modem, with both accessing the internet: one connects through the USB port and the other through the Ethernet port.
**A USB cable and an Ethernet cable should come with the modem; however, to connect the second computer you may need to buy a longer Ethernet cable, as you do not want the two computers sitting right next to each other. Get a blue cable, NOT red.
**To connect a computer to a broadband modem through USB, it is recommended that you use the installation CD. It will ask you to choose how you want to connect; select USB. Make sure you know your user name and password for your internet service provider.
**To connect the second computer to the same modem through the Ethernet port, just plug it in. It usually works straight away. If it does not, use the installation CD, but choose to connect with Ethernet instead of USB.

The Fundamentals of Fiber Optic Cable Management

Fiber optic cables are used frequently in today's telecommunication networks because of their high bandwidth, high reliability and relatively low cost. To maximize network performance, a good fiber cable management system must be in place.

There are four fundamental principles for a good fiber cable management system:

1. Bend radius reduction

Fiber bends beyond the specified minimum bending radius can cause signal loss or even break the fiber, causing service disruption.

Today, industry standards for traditional singlemode jumpers typically specify a minimum bend radius of ten times the outside diameter of the jacketed cable or 1.5" (38 mm), whichever is greater. A new breed of flexible singlemode optical fiber has the potential to significantly reduce these minimum bend radius requirements to values as low as 0.6" (15 mm), depending on the cable configuration, without increasing attenuation.

A reduced bend radius fiber is able to withstand tighter bends within frames, panels and pathways. It also enhances the reliability of a network and reduces network down time.
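The rule of thumb quoted above for traditional singlemode jumpers reduces to a one-line calculation: take the greater of ten times the jacketed cable's outside diameter and 38 mm (1.5"):

```python
# Minimum bend radius per the industry rule of thumb cited above:
# max(10 x outside diameter, 38 mm), for traditional singlemode jumpers.

def min_bend_radius_mm(outside_diameter_mm: float) -> float:
    """Return the minimum bend radius in mm for a jacketed cable."""
    return max(10.0 * outside_diameter_mm, 38.0)

print(min_bend_radius_mm(2.0))   # 2 mm jumper: the 38 mm floor applies
print(min_bend_radius_mm(5.0))   # 5 mm cable: 10 x OD governs, 50 mm
```

Reduced-bend-radius fiber relaxes the floor toward the 15 mm figure mentioned above, but the cable maker's datasheet, not this formula, is the authority for any specific product.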

2. Well defined cable routing paths

The major cause of minimum bend radius violations in optical fiber cable is improper routing of fibers by installation technicians.

Routing paths should be clearly pre-defined and easy to follow. In fact, these paths should be designed so that the technician has no option but to route the cables properly. If options are left to the technician, inconsistent human decisions can cause improper routing and hence bend radius violations. Well defined routing paths standardize the fiber optic installation process, and less training time is required for fiber technicians.

3. Easy access to installed optical fibers

Allowing easy access to installed fiber cable is essential for maintaining proper bend radius protection. The system should be designed to ensure that individual fibers can be installed or removed easily without negative effects on nearby fiber cable.

4. Physical protection of installed optical fibers

The management system must provide measures to physically protect fiber cables from accidental damage by technicians and equipment. Otherwise, network reliability and performance will be adversely affected.

Internet Access Types

DSL

Digital Subscriber Line (DSL) is an advanced technology for bringing a high speed internet connection to home and corporate users. DSL doesn't require new wiring because it runs over regular telephone lines. With DSL you can use your internet connection and make telephone calls at the same time.

ADSL

ADSL (Asymmetric Digital Subscriber Line) is a high speed internet connection used to send and receive data over conventional telephone lines. ADSL supports data rates of 1.5 Mbps to 9 Mbps when receiving data (downstream) and 16 to 640 Kbps when sending data (upstream).

Cable Net

Cable modems use the copper coaxial cable that carries TV signals to provide high speed access to the internet. The coaxial cable used by TV providers offers much greater bandwidth than regular telephone lines, so a cable modem provides broadband internet access. A cable modem is a network bridge that conforms to IEEE 802.1D for Ethernet networking, with some modifications. Some cable modem devices include a router to provide the local area network with its own IP addressing. Some of the major manufacturers of cable modems are Cisco, D-Link, Linksys, Motorola, Ericsson, Nortel Networks and 3Com.

Dial Up

Dial up is a type of internet access that works over regular telephone lines. The computer is granted internet access by connecting the telephone line to the modem in the computer and configuring the computer with the user name, password and dial up numbers provided by the local ISP. Dial up service is the least expensive but also provides the lowest internet speed. A dial up connection can be used with two types of modems: internal and external.

GPRS

GPRS (General Packet Radio Service) is a series of functionalities that allow mobile data streaming and transfer for users of the Global System for Mobile Communications (GSM). GPRS is also called 2.5G. GPRS allows multiple users to share a communication channel, and facilitates web browsing, SMS, multimedia messages, real time email reception and more.

WiMAX

WiMAX stands for Worldwide Interoperability for Microwave Access. WiMAX provides a very high speed broadband internet connection to home users, corporate users and roaming users over a wireless connection. WiMAX allows data, voice and video communication at the same time. A WiMAX connection can also be bridged and routed with a wired or wireless LAN. WiMAX provides data rates up to 70 Mbps.

Satellite Internet access

Satellite Internet services are used in locations where terrestrial internet access is not available. Satellite broadband is linked to the dish network subscriber service and provides data communication at speeds comparable to other broadband technologies. A two way satellite internet setup consists of a roughly two foot by three foot dish, two modems (for uplink and downlink) and coaxial cable between dish and modem.

Virtual Private Network (VPN) Technology

The proliferation of network users, accessibility, flexibility, and cost effectiveness of Wide Area Network (WAN)/Internet connections have increased the need for affordable and secure communications. Virtual Private Network (VPN) technology has become a preferred technology due to the security levels it provides during transmission of data.

VPN networks are primarily extended private networks comprising links across a shared public telecommunication infrastructure such as the Internet. In a VPN system, data is transmitted between two computers over the public network, emulating a point-to-point link. Data packets are encrypted at the sending end and decrypted at the receiving end. Due to the encryption and authentication of IP packets sent over VPN networks, the data, even if intercepted, is impossible for hackers to decipher without the encryption keys. VPN technologies maintain security and privacy by using tunneling protocols and security procedures. In VPN networking, a VPN can take various forms by combining different hardware and software technologies. VPN LANs are connections between a remote local area network (LAN) and a private network. VPN systems work across multiple environments and related technologies to render secure solutions.

The tunneling protocol, also called an encapsulation protocol, is a network technology for establishing and maintaining a logical network connection. The most widely used VPN tunneling protocols are Layer Two Tunneling Protocol (L2TP), IP Security (IPSec), Point-to-Point Tunneling Protocol (PPTP), Secure Sockets Layer/Transport Layer Security (SSL/TLS), OpenVPN, Layer 2 Tunneling Protocol version 3 (L2TPv3), VPN Quarantine, and Multi Path Virtual Private Network (MPVPN). VPN technology supports two types of tunneling: voluntary tunneling, where the VPN connection set-up is managed by the VPN client, and compulsory tunneling, where the VPN connection set-up is managed by the network provider. In tunneling, data packets are encapsulated within IP packets and then transmitted across the Internet. On reception at the receiving network end, the encapsulated packet is stripped from the IP packet to obtain the original message packet.
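The encapsulate-then-strip cycle described above can be illustrated with a toy example. Note the "cipher" here is a deliberately trivial XOR stand-in for illustration only; real VPNs use the protocols named above (IPSec, L2TP, and so on), not anything like this:

```python
# Toy illustration of tunneling: encrypt the inner packet, wrap it in an
# outer transit header, then at the far end strip the header and decrypt.
# The XOR "cipher" is a placeholder, not real VPN cryptography.

KEY = 0x5A  # shared secret for the toy cipher (illustration only)

def xor_cipher(data: bytes) -> bytes:
    """XOR is its own inverse, so the same call encrypts and decrypts."""
    return bytes(b ^ KEY for b in data)

def encapsulate(inner_packet: bytes, outer_header: bytes) -> bytes:
    """Encrypt the inner packet and prepend an outer transit header."""
    return outer_header + xor_cipher(inner_packet)

def decapsulate(outer_packet: bytes, outer_header: bytes) -> bytes:
    """Strip the outer header and decrypt to recover the original packet."""
    assert outer_packet.startswith(outer_header)
    return xor_cipher(outer_packet[len(outer_header):])

message = b"private LAN packet"
tunneled = encapsulate(message, b"OUTER-IP|")
assert b"private" not in tunneled          # payload is unreadable in transit
assert decapsulate(tunneled, b"OUTER-IP|") == message
```

The structure is what matters: routers between the endpoints only ever see and forward the outer packet, while the inner packet travels opaque and intact.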

Trusted VPNs and Secure VPNs are two major VPN technologies that secure and improve VPN performance. While Secure VPNs utilize cryptographic tunneling, trusted VPN networks depend only on the single provider’s network traffic to protect data. Trusted VPNs comprise Multi-Protocol Label Switching (MPLS), a technology that is frequently used to overlay VPNs with Quality of Service (QoS) across a trusted delivery network, and Layer 2 tunneling protocol, which takes on the characteristics of two proprietary VPN protocols.

Cryptographic tunneling protocols are used by secure VPNs to provide privacy to networks through encryption, authentication, and message integrity. In this advanced technique, there are options to block snooping through packet sniffing, block spoofing of identity and altering of messages. By implementing and operating the right secure VPN protocols it is possible to provide secure communications over insecure networks and considerably improve VPN performance.

The popular VPN tunneling protocols are Internet Protocol Security (IPSec), Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Tunneling Protocol (L2TP). Internet Protocol Security (IPSec) is a widely used and standardized VPN protocol that is most preferred due to its interoperability benefits. IPSec is an open standards framework consisting of a secure protocol suite that can be run on an existing IP connection. This VPN protocol operates at layer 3 of the OSI model. It provides data authentication and encryption, and can be implemented on any device communicating over IP. IPSec protects all data traffic carried over IP. It can also provide encryption and authentication for non-IP traffic by operating concurrently with Layer 2 tunneling protocols. The three major components incorporated in IPSec are: Authentication Header (AH), Encapsulating Security Payload (ESP), and Internet Key Exchange (IKE). The authentication header, which is added after the IP header, provides authentication at the packet level and ensures that data packets are not tampered with along the route. ESP provides confidentiality and authentication of data origin.

Point-to-Point Tunneling Protocol (PPTP) is Microsoft's proprietary development, used in VPN networking and communications. It authenticates users through authentication protocols (MS-CHAP, CHAP, SPAP, and PAP). Although PPTP offers ease of use, it is still not a very flexible solution and is not as interoperable as other VPN protocols. The communication types of PPTP are: PPTP connection (the client establishes a PPP link to an ISP), PPTP control connection (the user creates a PPTP connection to the server), and PPTP data tunnel (the client and server exchange communications within an encrypted tunnel). PPTP is generally employed to secure communication channels between Windows hosts on an internal network.

The Layer 2 Tunneling Protocol (L2TP) tunnels Point-to-Point protocol (PPP) across a public IP network. It operates on layer 2, enabling non-IP protocols to be transported through the VPN tunnel and also works on Layer 2 components such as ATM, frame relay, etc. L2TP can provide encryption service in conjunction with other protocols or encryption mechanisms.

Technological advancements have led businesses to look for enhancements to secure their networks and business communications. In the line of VPN technology, an influx of VPN products now occupies the marketplace. Customers compare VPN products based on functionality and flexibility, and employ the best of the technology. Comparing VPNs, or any technological products, opens a wide array of choices; it is up to the customer to match their set of requirements with the appropriate technology for effective use.

The Nanotube Computer

The nano future is emerging through the haze of hype: the road to terabit memory and cheap flat-screen displays will be paved with carbon nanotubes.

In the hype-filled world of nanotechnology, Phaedon Avouris, head of IBM Research's nanoscience and technology group, has a reputation as a meticulous and somewhat skeptical scientist. By his own description, he is one of those researchers whom reporters call to get a "realistic assessment" of the latest nanotech breakthrough. These days, though, the IBM chemist sounds uncharacteristically upbeat.

The reason for his excitement can be seen in a microscopic image recently produced in his lab. It shows a thin thread draped over several thick gold electrodes. What is not so apparent is that the thread, a single carbon nanotube, has been modified and positioned so that it forms two types of transistors, each a few nanometers (billionths of a meter) in diameter, a hundred times smaller than the transistors now found on computer chips. What's more, the nanotube transistors work together as a logic gate, the fundamental computer component responsible for selectively routing electrical signals, transforming them into meaningful ones and zeroes.

New Technology Now Lets Over 100,000 Students Remotely Access Their School

Having graduated from university in 2002, I can still remember the good old days when I used to sign up for classes over the phone. I had to pick up a course catalogue, wait for my registration time, and then punch in all of the numbers for the classes I wanted. Those days are no more, thanks to LinkProof, a piece of network hardware that allows students to remotely access their schools. LinkProof is made by Radware, a company specializing in network front-end application hardware.

Why do students need remote access?

Students can now sign up for classes from wherever they can get an internet signal. They do it from home, from a coffee shop with WiFi, even from a different country if they are spending a semester abroad. LinkProof also acts as a security shield for school applications, protecting the school's programs and hardware from viruses, trojans, and denial of service attacks.

Students who can remote access their schools can also get extra information passed on by teachers and professors. They can see class locations and most importantly check their test scores and grades.

Is remote access just for universities?

Absolutely not; remote access is currently being used by elementary schools as well. The Education Bureau of Yi-Lan County in Taiwan recently used Radware technology to assure remote access for its school system.

Teachers can correct work from home and access their grading programs remotely. Parents of students can check on their child's progress and see if they have missing assignments. They can also check their child's attendance records to see if their children are skipping classes.

Where is remote access being used?

Remote access is being used all over the world. Keimyung University in Korea built a campus-wide 10-gigabit high-speed backbone network to accommodate more than 27,000 students and 90 departments in nine faculties and 19 colleges. As a result, users have a continuous and secure connection to university resources. Concordia University in Wisconsin is one of 10 universities in the Concordia University System responsible for ensuring the availability and optimized performance of all email, Web access and administrative systems for more than 5,200 students. Remote access with Radware technology lets them do it all.

When looking for a school, check to see what remote access capabilities it offers. For both parents and students, remote access is a valuable tool used at all levels of the education system.

Cloud Computing and Security Issues

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. Cloud computing is a method of delivering hosted services, namely Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS), over the Internet in a fast, cost-effective way. The technology has gained popularity in a weakened economy as enterprises seek ways to save money, but, as always, this emerging technology presents certain risks, and it could open an organization to security vulnerabilities and threats.

    The concept generally incorporates combinations of infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the provider's servers. Cloud computing can be confused with grid computing, a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks, and with utility computing, the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity.

Characteristics

    Cloud computing customers do not generally own the physical infrastructure serving as host to the software platform in question. Instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, while others bill on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits. Additionally, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sites.
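The pay-only-for-what-you-use model described above is easy to see in a few lines of Python; the rates, resource quantities, and billing formula here are invented purely for illustration, not drawn from any real provider's pricing:

```python
# Sketch of utility-style metered billing: like an electricity bill,
# you pay for the compute hours and storage you actually consume.
def monthly_bill(cpu_hours, gb_stored, rate_per_cpu_hour=0.10, rate_per_gb=0.15):
    """Return the month's charge under a simple pay-per-use model."""
    return cpu_hours * rate_per_cpu_hour + gb_stored * rate_per_gb

# A server busy only half the month costs roughly half as much as one
# running flat out -- idle capacity is the provider's problem, not yours.
print(monthly_bill(360, 50))   # 360 CPU-hours plus 50 GB stored
print(monthly_bill(720, 50))   # the same server billed for the full month
```

Contrast this with buying the server outright, where the capital cost is the same whether it sits idle or not.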

Companies

     VMware, Sun Microsystems, IBM, Amazon, Google, Microsoft and Yahoo are some of the major cloud computing service providers. Cloud services are being adopted by everyone from individual users to large enterprises such as General Electric and Procter & Gamble.

Cloud computing types

Public cloud

     Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Hybrid cloud

     A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises".

Private cloud

     Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualization automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticised on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".

Cloud Computing and Security Issues

     The benefits of virtualization and cloud computing are transforming the way we look at IT outsourcing for development, testing, and production.  Existing skills, processes, and projects seem to translate naturally to a virtualized environment, and few obstacles seem to impede the adoption of the cloud model for production.  Practitioners and the media alike have raised the potential security issues of virtualization.  The cloud brings with it a layer of additional security considerations, in terms of both technology and process.

     This layer of additional security isn't necessarily scary or complicated.  But right now, trust in the security of cloud computing is the number one impediment to its growth.  This article looks at the cloud from various points of view, comparing real-world examples to examine the security implications of the cloud and show how they integrate with traditional security processes.

Setting Up a Small Business Computer Network

A computer network setup is an integral part of every organization today. Linking the different computers in an office helps the business grow by saving time and money and speeding up the pace of work. Each office has its own specific needs; one can approach a network consulting firm with those requirements and have a computer network setup that caters specifically to them. With a good computer network system in place, your computers can be used to their full potential and all your needs can be met.

An effective computer network begins with a good network design. Network consulting firms can suggest the best possible design based on the nature of the enterprise's work and the role the network will play in the workplace. There are various kinds of computer networks, from the LAN (Local Area Network) to the WAN (Wide Area Network), among other types. Each has its own particular role and use, and the right one can be chosen depending on the field of operation. One can even have a computer network setup that is wireless. A wireless system is a more practical solution these days, as people can stay connected while still roaming freely with their laptops. A remote access arrangement is also a good networking option. In the case of a traditional wired network, one should be certain that the cabling is of superior quality, or else you may end up with plenty of problems. A system integration firm would be the best option for setting up a good network in the office space.
The system integration firm will help integrate a new setup with the setup already existing in the organization. This can save the company money, as the professionals may be able to combine the new technology with the existing one rather than discarding the old network. The different systems in the company can also be brought together under one umbrella, which makes it possible to see the complete picture at once: the integration process makes all the subsystems work as one.

A computer network set up with the aid of network consultants, using their expertise in putting a network system in place, helps make the operation of the company smoother. An effective system integration process adds to the benefits of the computer network. All of this together triggers the growth process of the company. There are various firms offering network design and system integration services. Do a proper study before hiring any of them. Since this work requires thorough knowledge and expertise of all the systems in place, you have to be certain that the job is assigned to a competent firm with sufficient experience and knowledge behind it.

File sharing over the WAN - Storage Networking

File sharing over local area networks (LANs) has become an integral part of enterprise computing. Over 20 years ago, personal computer users realized the benefits of abandoning "sneakernet" by connecting individual computers into a network. This allowed more efficient file sharing and project collaboration for members of a workgroup who were only a cable length away, typically in the same room or building. Now there is a desire for global organizations to share file data in real time among all their locations.

These global organizations now wish to implement wide area network (WAN) based storage consolidation, to reduce storage resource management costs while also enabling remote employees to collaborate on projects just as if they were all on a local network. Given the current state of technology, however, enterprises are forced to limit the deployment of real time file sharing and networks to within each LAN, resulting in disconnected islands of file sharing throughout the enterprise.

Corporations have tried to fix this problem in various ways. For large files or large groups of files, companies may transfer the files overnight via the file transfer protocol (FTP). For ad hoc collaboration between employees, people might attach files to email messages. While these solutions are marginally acceptable, they all suffer from the same problem: the file sharing is not in real time, with everyone working off the same version of the data. Rather than sharing a single copy among many users, these methods duplicate files and propagate private copies to each user. The moment a file is emailed or FTP'ed, two or more potentially out-of-sync copies are created. The dissimilar versions need to be manually reconciled, usually after every set of revisions by the people working on the project.
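The out-of-sync problem is easy to demonstrate: once two sites hold private copies of a file, only comparing the copies themselves reveals the drift. A minimal Python sketch, comparing copies by content hash; the file contents are invented for illustration:

```python
# Once a file is emailed or FTP'ed, each site holds its own copy, and
# only an explicit comparison (here, by SHA-256 hash) shows that the
# copies have diverged and need to be manually reconciled.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a short identifier that changes whenever the content changes."""
    return hashlib.sha256(data).hexdigest()

original = b"Q3 budget draft v1"
site_a = original                      # site A leaves its copy untouched
site_b = original + b" (edited at B)"  # site B revises its private copy

print(fingerprint(site_a) == fingerprint(site_b))  # False: out of sync
```

With a single shared copy on a real-time WAN file-sharing system, there is nothing to reconcile in the first place.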

Facebook - the Complete Biography

Facebook is the second largest social network on the web, behind only MySpace in terms of traffic. Primarily focused on high school to college students, Facebook has been gaining market share, and more significantly a supportive user base. Since their launch in February 2004, they’ve been able to obtain over 8 million users in the U.S. alone and expand worldwide to 7 other English-speaking countries, with more to follow. Facebook is a growing phenomenon; let’s take a closer look.
Facebook’s shares have already gone up significantly since the company’s early days. According to filings with the California Department of Corporations, Facebook’s common shares were priced at 78 cents in January of 2006. By May of that year, they had jumped to $8.05. In August of this year, they were $6.61. Incorporation filings in Delaware show that Facebook split its shares 4-to-1 in July of 2006, so on a split-adjusted basis the shares were priced at 19.5 cents in January 2006. That means Facebook’s sense of its own worth has risen by more than 33-fold in less than two years. Facebook hasn’t filed the price of its common shares following the Microsoft investment, but on October 18 it filed with the State of Delaware to split its stock four-for-one again.
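The split-adjusted arithmetic in that paragraph can be worked through in a few lines (taking "this year" to be 2007, when the article appears to have been written; a 4-for-1 split divides the pre-split price by four):

```python
# Verifying the article's numbers: the January 2006 price, adjusted for
# the July 2006 4-for-1 split, against the August 2007 price.
pre_split_jan_2006 = 0.78               # 78 cents before the split
split_adjusted = pre_split_jan_2006 / 4 # 19.5 cents on a split-adjusted basis
print(split_adjusted)

aug_2007 = 6.61
print(aug_2007 / split_adjusted)        # roughly 33.9, i.e. "more than 33-fold"
```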

History

Facebook is a social networking website that allows people to communicate with their friends and exchange information. Launched on February 4, 2004, Facebook was founded by Mark Zuckerberg, a former member of the Harvard Class of 2006 and former Ardsley High School student. Within months, Facebook and its core idea spread across the dorm rooms of Harvard where it was very well received. Soon enough, it was extended to Stanford and Yale where, like Harvard, it was widely endorsed. Before he knew it, Mark Zuckerberg was joined by two fellow Harvard students - Dustin Moskovitz and Chris Hughes - to help him grow the site to the next level. Only months later, when it was officially a national student network phenomenon, Zuckerberg and Moskovitz dropped out of Harvard to pursue their dreams and run Facebook full time. In August 2005, TheFacebook was officially renamed Facebook, and the domain facebook.com was purchased for a reported $200,000.

Availability

Unlike its competitors MySpace, Friendster, Xanga, hi5, Bebo, and others, Facebook isn’t available to everyone — which explains its relatively low user count. Currently, users must be members of one of the 30,000+ recognized schools, colleges, universities, organizations, and companies within the U.S., Canada, and other English-speaking nations. This generally involves having a valid e-mail ID with the associated institution.

Business & Funding

Given the situation other social networks on the web are facing, Facebook is in a good position financially. While it hasn’t managed to get acquired like its rival MySpace (despite some rumors about an $800m deal with Viacom), it’s been quite lucky in most aspects. For its initial funding, it received $500,000 from Peter Thiel, co-founder of PayPal. A few months later, it was also able to get $13 million from Accel Partners, who are also investors in 15 other Web 2.0 startups, and $25 million from Greylock Partners, making their overall venture equal to approximately $40 million.

The Future

Facebook is a massively successful social networking service that grew to prominence in virtually no time. It’s not hard to see why: its features and tools are highly appealing, and Facebook users are extremely well networked in real life. Rumors of an acquisition continue to circulate, with some estimates putting the price in the billions of dollars. In the short term, however, Facebook plans to go it alone, continuing to build out one of the world’s most successful social networks.

What is Home Wireless Networking?

Home wireless networking is just what it sounds like -- a way of creating networks without any wires within your home! If this sounds exciting to you, then read on.

With a home wireless network, you can create radio connections between computers that let them communicate and connect to the Internet without you having to go to all the trouble of connecting them with wires. The computers don't even need to have a clear path for the signal, as the wireless signal can go through walls and between floors easily.

Where Did It Come From?

The story of wireless networking is a rather strange one. It is basically an application of a technology called frequency hopping which was, believe it or not, invented by the actress Hedy Lamarr and a musician named George Antheil, back in the 1940s. Seriously, do a web search -- I promise I'm not pulling your leg here.

They received a patent for their invention, which was intended to help in the war effort.  Hedy was Jewish, but had been made to hide it and socialise with Hitler as a young woman -- she had to drug her husband and run away to London to escape her native Austria. The importance of what they'd done, however, wasn't recognised until many years later.

The U.S. military adopted the technique in the '60s, using it during the Cuban Missile Crisis. Hedy never saw any money from it as the patent had expired (don't worry, she was a film star!), but she was given a Pioneer Award by the Electronic Frontier Foundation in 1997, three years before her death.

Wireless Network at Home.

When most people talk about wireless networks, they are talking about wireless LANs (local area networks). A local area network doesn't mean that it covers your whole neighbourhood -- the 'local area' in question can be only one building, such as your house. So if you want wireless networking in your home, you want a home wireless LAN.

Once people have wireless in their home, they always seem to act as if there's been an absolute miracle. After years of drilling holes in the walls and running wires all over the place, suddenly seeing them gone is really amazing.

Network Management & Server Management

1) What Is Network Management?

Network management means different things to different people. In some cases, it involves a solitary network consultant monitoring network activity with an outdated protocol analyzer. In other cases, network management involves a distributed database, auto polling of network devices, and high-end workstations generating real-time graphical views of network topology changes and traffic. In general, network management is a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks.

The ISO network management model's five functional areas are listed below.

    Fault Management—Detect, isolate, notify, and correct faults encountered in the network.
    Configuration Management—Track configuration aspects of network devices, such as configuration file management, inventory management, and software management.
    Performance Management—Monitor and measure various aspects of performance so that overall performance can be maintained at an acceptable level.
    Security Management—Provide access to network devices and corporate resources only to authorized individuals.
    Accounting Management—Track usage of network resources.
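The first of those areas, fault management, can be pictured as a simple polling loop: check each device, detect failures, and raise notifications. A toy Python sketch; real systems would use SNMP or ICMP, and the device names, health flags, and notification hook below are invented for illustration:

```python
# A toy fault-management loop: walk every device, detect failures,
# and notify so the fault can be isolated and corrected.
def poll(device_status, notify):
    """Report every device that fails its health check."""
    faults = []
    for device, healthy in device_status.items():
        if not healthy:
            faults.append(device)
            notify(f"FAULT: {device} is down -- isolating and escalating")
    return faults

status = {"core-switch-1": True, "edge-router-2": False, "fileserver": True}
alerts = []
down = poll(status, alerts.append)
print(down)  # ['edge-router-2']
```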

Any number or combination of network management disciplines can be outsourced, including:

        helpdesk and break-fix services
        fault management
        configuration management
        change management
        asset management
        performance management
        service level management


Your computers and networks need regular care to perform at their optimal level. As your IT manager, Northwest Computer Support monitors your computers and network health on a daily basis. Having this information allows us to proactively maintain your network and provide strategic guidance before trouble occurs.
2) What is Server Management?

Server management is the maintenance and operation of a server. While this can mean many things, the main idea behind server management is uptime. The whole purpose of a server is to be a reliable resource for users to interact with. Management of a server can vary depending on the size of the server and its purpose, and different types of servers require different management plans.

Types of Server Management

        Server Monitoring
        Asset Management
        Case Management
        License Management
        Server Optimization
        Backup Monitoring
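The first item on that list, server monitoring, ultimately comes down to the uptime idea above: record whether periodic health checks succeed and compare the result against a target. A minimal Python sketch; the check history and the SLA threshold are invented for illustration:

```python
# Uptime as a fraction of monitoring intervals in which the server
# answered its health check.
def uptime_percent(checks):
    """checks: list of booleans, one per monitoring interval."""
    return 100.0 * sum(checks) / len(checks)

# 1 failed check out of 20 intervals
history = [True] * 19 + [False]
pct = uptime_percent(history)
print(f"uptime: {pct:.1f}%")
if pct < 99.9:
    print("below target -- open a case and investigate")
```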

What is a Peer To Peer Network?

There are lots of different uses for a p2p network, and mostly they're good uses.  A lot of people are looking for free mp3 downloads or trying to get pirated or copyrighted movies or music for free.  Probably the most famous example of this was the Napster network.  Whether it was the first p2p network or not is debatable, but it was definitely the most popular and widely known p2p network.  Napster basically brought the concept of illegal sharing of mp3 music to the mainstream.

p2p networks have several characteristics which make them a great network configuration, but also make them attractive to people who are looking to use them for less than admirable reasons.  One of those characteristics is that a p2p network is what you would call 'distributed.'  A distributed network is one where your information, data, or hosts and nodes in the network are not centrally located.  They're not all in one place; they're spread all over, or geographically dispersed.  So what?  What makes dispersed nodes in a network a good thing?  There are several reasons.  First, you can tell your node in the network to look for your information at the closest available node with the data you want.  That way, you ensure the fastest possible transfer rate and lowest latency.  It also means a p2p network is resilient to outages.  One node goes down, get your download somewhere else.
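The "closest available node" idea above can be sketched simply: measure the latency to each peer that holds the file, skip the ones that are down, and download from the best reachable one. A Python sketch with invented peer names and latencies:

```python
# Pick the reachable peer with the lowest measured round-trip time.
# A None latency means the peer didn't answer -- the outage just
# reroutes the download to the next-best node.
def closest_peer(latencies_ms):
    up = {peer: ms for peer, ms in latencies_ms.items() if ms is not None}
    return min(up, key=up.get)

peers = {"peer-tokyo": 180, "peer-berlin": 45, "peer-local": None}
print(closest_peer(peers))  # peer-berlin
```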

Now, to really complicate things, you can break apart large files into smaller pieces or fragments.  This is beneficial because then you can simultaneously download multiple parts of the same file!  Now you're really smoking with your download speeds.  If you have a bunch of stuff to share, or you're getting free mp3 downloads from the internet, a p2p network works really well.
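Fragmenting a file is just fixed-size slicing, which is what lets a p2p client fetch different pieces from different peers at the same time. A Python sketch; the chunk size and stand-in file contents are invented for illustration:

```python
# Split a file's bytes into fixed-size fragments; each fragment can
# then be downloaded from a different peer in parallel and reassembled.
def split_into_chunks(data: bytes, chunk_size: int):
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

song = bytes(range(256)) * 40           # a stand-in for a 10 KB file
pieces = split_into_chunks(song, 4096)  # 4 KB fragments
print(len(pieces))                      # 3 fragments
assert b"".join(pieces) == song         # reassembly recovers the file
```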

There is one final characteristic of a p2p network that makes it good for illegal file sharing or downloading mp3s, movies, and such: anonymity.  In most p2p networks, data is not stored centrally, which means there's no 'big brother' looking over all the traffic between nodes in the network.  In some p2p networks, each client maintains a small database which lists the available files and the nodes where those files are available.  So each client or node will just connect directly to the clients that have the files it wants.  Who's going to monitor that and say you're not allowed to do that?  Or check that the files you're sharing aren't protected by copyright or something?  It's very difficult to monitor and control p2p networks.
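The small database each client keeps can be pictured as a table mapping file names to the peers believed to hold them, so lookups never pass through any central server. A Python sketch with invented file names and peer IDs:

```python
# Each client's private index: file name -> peers known to host it.
# There is no central copy of this table to monitor or take down.
local_index = {
    "vacation.mp3": ["peer-17", "peer-42"],
    "lecture.avi": ["peer-08"],
}

def find_sources(filename):
    """Return the peers this client believes hold the file."""
    return local_index.get(filename, [])

# The client then connects directly to one of the listed peers.
print(find_sources("vacation.mp3"))
```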