Monday, March 25, 2013

Computer Fundamentals



                  Lyceum of Alabang
                               KM 30 Tunasan, Muntinlupa City







Research Project
(Computer Hardware Fundamentals)







John Jovi Bon
BSCPE 12M1








Mr. Rodrigo Calapan









Table of Contents


I. Occupational Health & Safety Standards

II. Computer Fundamentals

III. Computer Hardware

IV. Computer Software

V. Operating System

VI. Technical People

VII. Electricity & Power Supply

VIII. Parts of the Computer
             • Motherboard Parts & Functions

             • Hard Disk Drive Parts & Functions

             • Floppy Disk Drive Parts & Functions

             • USB Flash Drive Parts & Functions

             • Compact Disk Parts & Functions

             • Digital Versatile Disk

             • Blu-ray Disk

IX. CMOS Manipulation

X. Beep Codes & Error Messages

XI. Network Cabling

XII. Computer Networking

XIII. Network Topology

XIV. Data/Resource Sharing

XV. Latest Gadgets

XVI. References



 I.  Occupational Health and Safety Standards


1. Remove your watch and other jewelry and secure loose clothing.

2. Turn off the power and unplug the equipment before opening the case.

3. Cover any sharp objects inside the system case with tape.

4. Never open a monitor or a power supply.

5. Do not touch areas in a printer that are hot or that use high voltage.

6. Know where the fire extinguisher is located and how to use it.

7. Know where the first aid kit is located.

8. Keep food and beverages out of your work space.

9. Keep your work space clean and free from clutter.

10. Bend your knees when lifting a heavy object to avoid back injury.






II.  Computer Fundamentals

HISTORY & GENERATIONS
Ever since man learned to count, he has developed ways of recording and communicating numbers. His earliest approach to counting, computation, and record-keeping relied on sticks, pebbles, and lines drawn on the walls of caves. He then moved on to counting on the ten fingers of his hands, which is probably the basis of the present decimal system.
The earliest computing device, used by the Egyptians as early as 450 B.C., was the abacus. The Chinese version of the abacus was a bead-on-wires counting frame, which is still widely used in Southeast Asia, China, and Japan.
The first desktop calculating machine capable of performing various arithmetic operations was developed as early as 1642 by the French scientist Blaise Pascal (1623-1662). It consisted mainly of gears and wheels and could perform only two basic operations: addition and subtraction.
The German mathematician Gottfried Leibniz later improved on Pascal's calculator so that it could perform all four basic arithmetic operations (+, -, ×, ÷).
Charles Babbage designed an early computing machine called the Difference Engine in 1822, which could produce reliable tables. He improved on this design and in 1833 proposed the Analytical Engine, a machine that could perform the basic arithmetic functions and was intended to be completely automatic. It used punched cards for input and output. Babbage is known as the "Father of Computers."
In 1920, Leonardo Torres demonstrated a digital calculating machine in Paris.
The punched cards that Babbage used as I/O media were developed further by Herman Hollerith in 1889. Hollerith founded the company that later became IBM (International Business Machines).
As the demand for punched-card machines increased, their inadequacy for scientific computation led to the development of an electromechanical calculator known as the MARK-I, the first automatic general-purpose digital computer. It could perform three additions per second, took about four seconds for a multiplication, and about eleven seconds for a division. The machine was designed by Prof. Howard Aiken of Harvard University and completed in 1944.
The first electronic computer, ENIAC (Electronic Numerical Integrator And Calculator), was completed in 1946. It could perform about 5,000 calculations per second. It was a huge machine that occupied about 1,800 sq. ft. and weighed about 30 tons.
After ENIAC, the next development was EDVAC (Electronic Discrete Variable Automatic Computer), an electronic computer based on John von Neumann's stored-program concept; it was completed in 1949.
Almost simultaneously with the U.S.A.'s EDVAC, British scientists developed EDSAC (Electronic Delay Storage Automatic Calculator). This machine could execute mathematical operations in a matter of a few microseconds.
Then, in 1951, came UNIVAC (Universal Automatic Computer), the commercial version of the stored-program computer and the first commercially available digital computer.

GENERATIONS
The development of computers has passed through distinct stages in the technology used, and these stages of technological difference are called generations.
FIRST GENERATION (1945-1960):
First-generation computers used vacuum tube (valve) technology. Almost all the early computers, such as ENIAC, EDVAC, and EDSAC, were made a reality only by the invention of the vacuum tube, a fragile glass device that can control and amplify electronic signals. ENIAC used 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, and 6,000 switches. It drew 150 kilowatts of electric power and produced a large amount of heat. These machines were bulky and required large rooms. They had small, primitive memories and no auxiliary storage.
SECOND GENERATION (1960-1965):
With the development of the transistor and its use in circuits, along with magnetic cores for memory storage, the vacuum tubes of the first generation were replaced by transistors, giving rise to the second generation of computers. Transistors are much smaller than vacuum tubes; they consume less power, generate less heat, and are faster and more reliable. The transistor was developed by William B. Shockley, John Bardeen, and Walter H. Brattain, scientists working at Bell Telephone Laboratories in the U.S.A., who received the Nobel Prize for it. The major advantage of the transistor was that both the size of the computer and its power consumption came down. Because transistors also cost less than vacuum tubes, the cost of computers dropped drastically, and second-generation machines were more reliable than first-generation computers. High-level languages such as FORTRAN, COBOL, SNOBOL, and ALGOL were developed in this generation, and magnetic tape was used for storage.
THIRD GENERATION (1965-1975):
With the development of silicon chips, the third generation of computers came into existence. These computers used compact integrated circuits (ICs) of silicon in place of transistors. Each IC packed a large number of circuit components into a very small package. With ICs, the size, cost, heat generation, and power consumption of computers decreased to a great extent, while speed and reliability increased compared to previous generations. These machines used ICs with LSI (Large Scale Integration).
FOURTH GENERATION (FROM 1975):
Computers belonging to this generation used integrated circuits with VLSI (Very Large Scale Integration). These computers have high processing power, low maintenance, high reliability, and very low power consumption, and they further reduced both the cost and the size of the computer.
FIFTH GENERATION:
These computers use optical-fiber technology and are intended to handle artificial intelligence, expert systems, robotics, and the like. They have very high processing speeds and are more reliable.


III.  Computer Hardware

Computer hardware is the collection of physical elements that make up a computer system. Computer hardware refers to the physical parts or components of a computer, such as the monitor, keyboard, data storage devices, hard disk drive, mouse, printer, CPU, graphics card, sound card, memory, motherboard, and chips, all of which are physical objects that you can actually touch. In contrast, software is untouchable. Software exists as ideas, applications, concepts, and symbols, but it has no substance. A combination of hardware and software forms a usable computing system.

Desktop
A desktop computer typically refers to one where the processor and the storage drives are contained in a big box, while the keyboard and monitor plug in separately. These machines will generally have several advantages.
FIGURE 1 A Desktop Computer is a cost-effective way to get a lot of storage and processing power

·         The performance per cost will generally be better than that of smaller computers, which makes them faster for the money.
·         Desktop computers typically use 3.5 inch hard drives, which generally offer more capacity, at higher speeds, for less money than laptop-sized drives.
·         Desktop computers often allow the installation of more than one hard drive. This can be critical for digital photographers who need to store lots of image files.
·         Desktop computers often come with faster processors or processors with more cores. This can significantly increase the speed of your imaging activities.
·         Desktop computers often provide for 'expansion' capabilities. Additional 'cards' can be installed that improve image processing, add new storage connections, or offer sound and video capabilities.
Laptop
Within the last few years, laptop computers have become fast and capable enough to be the primary image-processing computer for the digital photographer. They also offer great portability, which can be important for the location photographer. 
FIGURE 2 A Laptop computer has all the components in one portable enclosure
Laptop Limitations
Compared to a desktop computer, a laptop will have certain limitations. These can make your work slower, as well as limiting the storage capacity of the computer. All of these limitations are compared to a desktop computer of the same cost and date of manufacture. Although these are generally important limitations, they may be much less important than the gain in portability that a laptop offers.
·         The processor will generally be slower, and the number of cores may be more limited
·         The hard drive will be smaller in capacity, and might be slower
·         Most laptops only allow the installation of one hard drive
·         The monitor screen will probably have a smaller color gamut, limiting the range of colors it can display
·         The RAM will generally not be as expandable
·         The video card will not be as fast, and probably will not be upgradable

Netbook
In the last few years, a new type of laptop computer has appeared: the netbook. Netbook computers are small versions of laptops, designed primarily for surfing the internet. It may be possible to do some imaging work with a netbook, but demanding software like Photoshop or Lightroom may not run well. Outlined below are the typical drawbacks of a netbook compared to a conventional laptop.
FIGURE 3 Netbooks are tiny laptop computers that are optimized for connecting to the internet
·         The processor will be low-powered (low speed as well) and will typically be limited to a single core
·         The RAM will be quite limited, often allowing no more than 1 Gigabyte
·         The monitor screen will typically be very small - sometimes too small for the minimum window size of a software application
·         Some netbooks allow the use of a regular laptop hard drive, and these would be able to have upgraded storage capacity.  Others will use flash memory for storage and will not allow for an upgrade
·         Netbooks will have limited connection ports - often limited to a single USB port
Tablets
Tablet computers are simple low-power computers like the Apple iPad. At the moment, tablets are pretty limited in what they can offer as a digital-photography imaging computer. Like a netbook, a tablet is optimized for use in conjunction with the internet, and is designed for low power draw. As such, they are typically more useful as a presentation tool, or for use in conjunction with another computer, than as a stand-alone imaging device. Like all of computing, however, they will only get cheaper and more powerful as time goes by.
Many companies believe that the future of publishing will be tied to the use of tablet computers. A tablet is a great tool for the consumer to use to access electronic publications, particularly ones that are a mixture of text, photos and motion imagery.



Processor
The processor is the part of the computer that does the actual computing. The speed that the computer can run an operation - such as sharpening an image in Photoshop - is largely determined by how fast the processor can make calculations. There are three parts to determining how fast a processor can do its work: clock speed, number of cores, and chip generation.
Clock Speed
Every processor has a speed rating, currently measured in Gigahertz or GHz. The higher the number, the faster it runs. In theory, a processor that runs at 2 GHz will be twice as fast as one that runs at 1 GHz. Note that this relationship only holds for processors from the same generation (more on that below).
Number of Cores
A core is the part of the processor that actually does the calculations. One way that computer chip makers have increased speed is by adding additional cores. A dual-core processor can, in theory, run operations twice as fast as a single-core processor of the same design and clock speed. Making use of multiple cores is, however, not as straightforward as you might think. In order for a dual-core chip to be twice as fast, the software (such as Photoshop) needs to split the computing tasks into two even streams. In most cases, this is not particularly efficient, so you do not see nearly the speed increase that multiple cores might suggest.
Multiple cores can make some computing tasks go quickly, and for others there is no speed increase at all. In many cases, both clock speed and number of cores are less important than the chip generation; the sketch below makes this concrete.
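Here is a rough Python sketch of Amdahl's law, which describes how much extra cores can help when only part of a task can be split across them. The 70% parallel fraction is an illustrative assumption, not a measurement of Photoshop or any real processor:

# Rough sketch of Amdahl's law: the estimated speedup from extra cores
# when only part of a task can be split across them. Illustrative
# numbers only; not benchmarks of any real chip or application.

def speedup(cores, parallel_fraction):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If 70% of an imaging task parallelizes, doubling the cores gives
# far less than double the speed:
for cores in (1, 2, 4, 8):
    print(f"{cores} core(s): {speedup(cores, 0.70):.2f}x")
# -> 1.00x, 1.54x, 2.11x, 2.58x
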
Chip Generation
Every few years, the companies that make processor chips will redesign the entire chip architecture to make them faster. Sometimes the clock speed of the newer chips will be slower, even as the real-world speed of the chip increases.
Intel's Core 2 Duo chips that run at 3 GHz, for instance, will run Photoshop slower than an i7 chip running at 2 GHz, because the i7 is a newer generation.
There are two main chip manufacturers, Intel and AMD. If you want to find out what the latest generation of processor is from each company, you can look them up on Wikipedia. As of April 2011, when this was written, the latest Intel chips are the i5 and i7 models, and the latest AMD chips are called Phenom II.


RAM

Random Access Memory (RAM) is a critical performance component for computers. When a computer boots up, opens a programme, and then opens a file, it loads these elements into RAM to do the work. When your computer opens an image file in Photoshop and loads it into RAM, it has quick access to the image data. When you do something to the file in Photoshop, it can make the change quickly, since it already has all the information nearby.  
In general, you want to have as much RAM in your computer as you can afford. If you are running a 32-bit operating system, then each program can make use of 2 GB to 3 GB of RAM. If you have a 64-bit OS, then each program can use as much RAM as you have available.
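Those 32-bit figures come straight from address arithmetic: an n-bit pointer can refer to at most 2^n distinct bytes. A quick sketch:

# Why a 32-bit program tops out at a few GB: an n-bit address can
# refer to at most 2**n distinct bytes of memory.
def max_addressable_gib(address_bits):
    return 2 ** address_bits / 2 ** 30  # bytes converted to GiB

print(max_addressable_gib(32))  # 4.0 GiB total; the OS reserves part,
                                # leaving roughly 2-3 GB per program
print(max_addressable_gib(64))  # about 1.7e10 GiB, far more than any
                                # machine's installed RAM
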


Video card or GPU
              A video card or Graphics Processing Unit (GPU) is the part of the computer that connects to the monitor. The GPU takes the video image and puts it in a form that can be displayed on the monitor. In most laptops, this is built into the system and cannot be replaced. In most desktop computers, the GPU fits into a slot on the motherboard, and can often be upgraded.

GPUs are specialized processors that are designed for displaying images. A computer processor (like those discussed above) will work on one or two streams of data. A GPU, by contrast, is designed to process all the pixels on a screen very quickly, so it needs to be able to handle a lot of parallel streams of data. You can think of it as a specialized computer that is built to draw images quickly.
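As a loose CPU-side analogy, compare touching pixels one at a time with operating on the whole frame at once. NumPy's whole-array operations merely stand in for the GPU's parallel streams here; real GPU code would be written in CUDA, OpenCL, or a shader language:

# Loose analogy only: NumPy's whole-array operations stand in for the
# GPU's many parallel streams.
import numpy as np

# A small fake grayscale frame (values 0-255).
image = np.random.randint(0, 256, size=(100, 160), dtype=np.uint16)

# One pixel at a time, like a single CPU stream:
brighter_loop = image.copy()
for y in range(image.shape[0]):
    for x in range(image.shape[1]):
        brighter_loop[y, x] = min(int(image[y, x]) + 50, 255)

# The whole frame at once, like a GPU's parallel streams:
brighter_vec = np.minimum(image + 50, 255)

assert (brighter_loop == brighter_vec).all()
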
In recent years, imaging software has begun to take advantage of the capabilities of GPUs, and can use them to make your imaging work go faster. You can think of it like a little 'helper computer' that is already installed in your system. It knows how to do just one thing - process images - and it can do that very quickly. Programmes that can take advantage of this capability are said to be GPU accelerated.
Like many parts of computing, however, the promise does not always turn into reality. There are some places where GPU acceleration works well, and some where it adds no additional speed.  If you are processing video, making use of GPU acceleration is going to be a big help.  If you are running a still photo application, then most of the time it will not make much difference.

Digital storage
Every computer needs a storage device to store the operating system, programmes and files. In the vast majority of cases, this is provided by one or more hard drives. Most computers have at least one internal hard drive, and they have the capability to connect to external drives.  
Most computers also have the ability to read and write optical discs such as CD, DVD, or Blu-ray. These are used for loading data and programmes on and off the computer.
Digital storage is a critical part of any computer system, and using storage properly can be the difference between keeping your photos and losing them. For more information, take the course on digital storage and backups.

IV.  Computer Software


Software is a generic term for organized collections of computer data and instructions, often broken into two major categories: system software that provides the basic non-task-specific functions of the computer, and application software which is used by users to accomplish specific tasks.

System software is responsible for controlling, integrating, and managing the individual hardware components of a computer system so that other software and the users of the system see it as a functional unit without having to be concerned with the low-level details such as transferring data from memory to disk, or rendering text onto a display. Generally, system software consists of an operating system and some fundamental utilities such as disk formatters, file managers, display managers, text editors, user authentication (login) and management tools, and networking and device control software.
Application software, on the other hand, is used to accomplish specific tasks other than just running the computer system. Application software may consist of a single program, such as an image viewer; a small collection of programs (often called a software package) that work closely together to accomplish a task, such as a spreadsheet or text processing system; a larger collection (often called a software suite) of related but independent programs and packages that have a common user interface or shared data format, such as Microsoft Office, which consists of closely integrated word processor, spreadsheet, database, etc.; or a software system, such as a database management system, which is a collection of fundamental programs that may provide some service to a variety of other independent applications.

Software is created with programming languages and related utilities, which may come in several of the above forms: single programs, like script interpreters; packages containing a compiler, linker, and other tools; and large suites (often called Integrated Development Environments) that include editors, debuggers, and other tools for multiple languages.


V.  Operating System

An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system. Application programs usually require an operating system to function.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources.
For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft Windows,[3] Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share roots in UNIX.
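As noted above, application code usually reaches OS services through system calls rather than driving the hardware itself. A minimal sketch on a Unix-like system, using Python's os module, which exposes thin wrappers over these calls (the filename is made up for the example):

# A user program never drives the disk directly; it asks the OS via
# system calls.
import os

fd = os.open("example.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # open(2)
os.write(fd, b"written through a system call\n")              # write(2)
os.close(fd)                                                  # close(2)
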

Types of Operating System

Real-time
A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of behavior. The main objective of real-time operating systems is their quick and predictable response to events. They have an event-driven or time-sharing design and often aspects of both. An event-driven system switches between tasks based on their priorities or external events while time-sharing operating systems switch tasks based on clock interrupts.
Multi-user
A multi-user operating system allows multiple users to access a computer system at the same time. Time-sharing systems and Internet servers can be classified as multi-user systems as they enable multiple-user access to a computer through the sharing of time. Single-user operating systems have only one user but may allow multiple programs to run at the same time.
Multi-tasking vs. single-tasking
A multi-tasking operating system allows more than one program to be running at a time, from the point of view of human time scales. A single-tasking system has only one running program. Multi-tasking can be of two types: pre-emptive and co-operative. In pre-emptive multitasking, the operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like operating systems such as Solaris and Linux support pre-emptive multitasking, as does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used pre-emptive multi-tasking. Mac OS prior to OS X supported cooperative multitasking.
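The cooperative style can be sketched in a few lines of Python, with generators standing in for processes. This only illustrates the scheduling idea, not how any real OS is written; note that a task that never yields would hang all the others, which is exactly the weakness pre-emptive multitasking removes by interrupting tasks on a timer:

# Minimal sketch of cooperative multitasking: each task runs until it
# voluntarily yields, as in 16-bit Windows.
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # voluntarily give up the CPU

def scheduler(tasks):
    while tasks:
        current = tasks.pop(0)      # simple round-robin queue
        try:
            next(current)           # let the task run one step
            tasks.append(current)
        except StopIteration:
            pass                    # task finished

scheduler([task("A", 2), task("B", 3)])  # A and B take turns
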
Distributed
Further information: Distributed system
A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they make a distributed system.
Embedded
Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems.


History
Early computers were built to perform a series of single tasks, like a calculator. Operating systems did not exist in their modern and more complex forms until the early 1960s. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers.
In the 1940s, the earliest electronic digital systems had no operating systems. Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards. After programmable general purpose computers were invented, machine languages (consisting of strings of the binary digits 0 and 1 on punched paper tape) were introduced that sped up the programming process (Stern, 1981).
In the early 1950s, a computer could execute only one program at a time. Each user had sole use of the computer for a limited period of time and would arrive at a scheduled time with program and data on punched paper cards and/or punched tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could generally be debugged via a front panel using toggle switches and panel lights. It is said that Alan Turing was a master of this on the early Manchester Mark 1 machine, and he was already deriving the primitive conception of an operating system from the principles of the Universal Turing machine.
 Later machines came with libraries of programs, which would be linked to a user's program to assist in operations such as input and output and generating computer code from human-readable symbolic code. This was the genesis of the modern-day operating system. However, machines still ran a single job at a time. At Cambridge University in England the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job-priority.

Mainframes                                                                                                    

Through the 1950s, many major features were pioneered in the field of operating systems, including batch processing, input/output interrupt, buffering, multitasking, spooling, runtime libraries, link-loading, and programs for sorting records in files. These features were included or not included in application software at the option of application programmers, rather than in a separate operating system used by all applications. In 1959 the SHARE Operating System was released as an integrated utility for the IBM 704, and later in the 709 and 7090 mainframes, although it was quickly supplanted by IBSYS/IBJOB on the 709, 7090 and 7094.
During the 1960s, IBM's OS/360 introduced the concept of a single OS spanning an entire product line, which was crucial for the success of the System/360 machines. IBM's current mainframe operating systems are distant descendants of this original system and applications written for OS/360 can still be run on modern machines.
 OS/360 also pioneered the concept that the operating system keeps track of all of the system resources that are used, including program and data space allocation in main memory and file space in secondary storage, and file locking during update. When the process is terminated for any reason, all of these resources are re-claimed by the operating system.
The alternative CP-67 system for the S/360-67 started a whole line of IBM operating systems focused on the concept of virtual machines. Other operating systems used on IBM S/360 series mainframes included systems developed by IBM: COS/360 (Compatibility Operating System), DOS/360 (Disk Operating System), TSS/360 (Time Sharing System), TOS/360 (Tape Operating System), BOS/360 (Basic Operating System), and ACP (Airline Control Program), as well as a few non-IBM systems: MTS (Michigan Terminal System), MUSIC (Multi-User System for Interactive Computing), and ORVYL (Stanford Timesharing System).
Control Data Corporation developed the SCOPE operating system in the 1960s, for batch processing. In cooperation with the University of Minnesota, the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games. Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language – ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM approached Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early mainframe systems, this was a batch-oriented system that managed magnetic drums, disks, card readers, and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed to General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.
In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying computer architectures to appear to be the same as others in a series. In fact most 360s after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility were proven to be more significant.
The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
·         Burroughs MCP – B5000, 1961 to Unisys Clearpath/MCP, present.
·         IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
·         IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
·         UNIVAC EXEC 8 – UNIVAC 1108, 1967, to OS 2200 Unisys Clearpath Dorado, present.

Microcomputers

Mac OS by Apple Computer became the first widespread OS to feature a graphical user interface. Many of its features such as windows and icons would later become commonplace in GUIs.
The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk operating system was CP/M, which was supported on many early microcomputers and was closely imitated by Microsoft's MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). In the '80s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative Graphical User Interface (GUI) to the Mac OS.
The introduction of the Intel 80386 CPU chip with 32-bit architecture and paging capabilities provided personal computers with the ability to run multi-tasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the NeXTSTEP operating system. NeXTSTEP would later be acquired by Apple Inc. and used, along with code from FreeBSD, as the core of Mac OS X.
The GNU Project was started by activist and programmer Richard Stallman with the goal of creating a complete free software replacement to the proprietary UNIX operating system. While the project was highly successful in duplicating the functionality of various parts of UNIX, development of the GNU Hurd kernel proved to be unproductive. In 1991, Finnish computer science student Linus Torvalds, with cooperation from volunteers collaborating over the Internet, released the first version of the Linux kernel. It was soon merged with the GNU user space components and system software to form a complete operating system. Since then, the combination of the two major components has usually been referred to as simply "Linux" by the software industry, a naming convention that Stallman and the Free Software Foundation remain opposed to, preferring the name GNU/Linux. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.

Examples of Operating System

UNIX and UNIX-like operating systems

Unix was originally written in assembly language. Ken Thompson wrote B, mainly based on BCPL, based on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C, developed into a large, complex family of inter-related operating systems which have been influential in every modern operating system.
The UNIX-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "UNIX-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
Unix-like systems run on a wide variety of computer architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free UNIX variants, such as Linux and BSD, are popular in these areas.
Four operating systems are certified by The Open Group (holder of the Unix trademark) as Unix. HP's HP-UX and IBM's AIX are both descendants of the original System V Unix and are designed to run only on their respective vendor's hardware. In contrast, Sun Microsystems's Solaris Operating System can run on multiple types of hardware, including x86 and Sparc servers, and PCs. Apple's OS X, a replacement for Apple's earlier (non-Unix) Mac OS, is a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD.
Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.

BSD and its descendants

A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on webservers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on a number of computers running an OS based on BSD called NeXTSTEP.
BSD has its roots in Unix. In 1974, the University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computer's hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T.
Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first webserver to create the World Wide Web.
Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. Eventually, after two years of legal disputes, the BSD project came out ahead and spawned a number of free derivatives, such as FreeBSD and NetBSD.
OS X
OS X (formerly "Mac OS X") is a line of open core graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997. The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0 "Cheetah") following in March 2001. Since then, six more distinct "client" and "server" editions of OS X have been released, the most recent being OS X 10.8 "Mountain Lion", which was first made available on February 16, 2012 for developers, and was then released to the public on July 25, 2012. Releases of OS X are named after big cats.
Prior to its merging with OS X, the server edition – OS X Server – was architecturally identical to its desktop counterpart and usually ran on Apple's line of Macintosh server hardware. OS X Server included work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others. With Mac OS X v10.7 Lion, all server aspects of Mac OS X Server have been integrated into the client version and the product re-branded as "OS X" (dropping "Mac" from the name). The server tools are now offered as an application.
Linux and GNU
Linux (or GNU/Linux) is a Unix-like operating system that was developed without any actual Unix code, unlike BSD and its variants. Linux can be used on a wide range of devices from supercomputers to wristwatches. The Linux kernel is released under an open source license, so anyone can read and modify its code. It has been modified to run on a large variety of electronics. Although estimates suggest that Linux is used on 1.82% of all personal computers, it has been widely adopted for use in servers and embedded systems (such as cell phones). Linux has superseded Unix in most places, and is used on the 10 most powerful supercomputers in the world. The Linux kernel is used in some popular distributions, such as Red Hat, Debian, Ubuntu, Linux Mint and Google's Android.
The GNU project is a mass collaboration of programmers who seek to create a completely free and open operating system that was similar to Unix but with completely original code. It was started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux variants. Thousands of pieces of software for virtually every operating system are licensed under the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted information about his project on a newsgroup for computer students and programmers. He received a wave of support and volunteers who ended up creating a full-fledged kernel. Programmers from GNU took notice, and members of both projects worked to integrate the finished GNU parts with the Linux kernel in order to create a full-fledged operating system.
Google Chromium OS
Main article: Google Chromium OS
Chromium OS is an operating system based on the Linux kernel and designed by Google. Since Chromium OS targets computer users who spend most of their time on the Internet, it is mainly a web browser with limited ability to run local applications, though it has a built-in file manager and media player. Instead, it relies on Internet applications (or Web apps) used in the web browser to accomplish tasks such as word processing, as well as online storage for storing most files.

Microsoft Windows

Microsoft Windows is a family of proprietary operating systems designed by Microsoft Corporation and primarily targeted to Intel architecture based computers, with an estimated 88.9 percent total usage share on Web connected computers. The newest versions are Windows 8 for workstations and Windows Server 2012 for servers. Windows 7 recently overtook Windows XP as the most used OS.
Microsoft Windows originated in 1985 as an operating environment running on top of MS-DOS, which was the standard operating system shipped on most Intel architecture personal computers at the time. In 1995, Windows 95 was released, using MS-DOS only as a bootstrap. For backwards compatibility, Win9x could run real-mode MS-DOS and 16-bit Windows 3.x drivers. Windows ME, released in 2000, was the last version in the Win9x family. Later versions have all been based on the Windows NT kernel. Current versions of Windows run on IA-32 and x86-64 microprocessors, and Windows 8 also supports the ARM architecture. In the past, Windows NT supported non-Intel architectures.
Server editions of Windows are widely used. In recent years, Microsoft has expended significant capital in an effort to promote the use of Windows as a server operating system. However, Windows' usage on servers is not as widespread as on personal computers, as Windows competes against Linux and BSD for server market share.

Other

There have been many operating systems that were significant in their day but are no longer so, such as AmigaOS; OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; XTS-300; RISC OS; MorphOS and FreeMiNT. Some are still used in niche markets and continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while for example Singularity is used purely for research.

VI. Technical People
For most people, calling technical support is somewhere near dental work on a list of fun things to do. Believe it or not, calling tech support for a computer problem doesn't have to ruin your day.
The ideas behind these tips apply outside the computer world, too, so feel free to keep them in mind when your smartphone quits checking email or your DVR is stuck on one channel.
I can't promise that the experience will be enjoyable, but there are several things you can do to help make talking to tech support less painful for you than it may have been in the past.

Be Prepared Before Calling

Before you pick up the phone, make sure you're prepared to explain your problem. The better prepared you are, the less time you'll spend talking to tech support.
The exact things you should have ready will vary depending on your problem, but here are several to keep in mind:
·         If you have an error message: What's the exact error message on your screen?
·         If you don't have an error message: What exactly is your computer doing? "It just doesn't work" isn't going to cut it.
·         When did the problem start happening?
·         Did anything else happen at the same time the problem started? (e.g. a blue screen of death, smoke coming from the computer, virus warning, etc.)
·         What have you already done to troubleshoot the problem?
·         Has the problem changed since it first started happening? (e.g. computer shuts off more frequently, error message appears at a different time now, etc.)
I recommend writing all of this down before requesting any tech support.

Communicate Clearly

Working with technical support is all about communication. The entire reason for your call is to communicate to the support person what the problem is and for them to communicate back to you what you need to do (or they need to do) to fix your problem.
The person on the other end of the phone might be 10 miles away or 10,000 miles away. He or she might be from the same part of your country or from a part of a country you didn't even know existed. That said, you'll prevent a lot of needless confusion and frustration if you talk slowly and enunciate properly.
Also, make sure you're calling from a quiet area. A barking dog or screaming child is unlikely to improve upon any communication problem you may be having already.

Be Thorough and Specific

I touched on this a little in the Be Prepared Before Calling tip above, but the need to be thorough and specific demands its own section! You may be well aware of the trouble your computer has been having but the tech support person is not. You have to tell the whole story in as much detail as possible.
For example, saying "My computer just quit working" doesn't say anything at all. There are millions of ways a computer might not "be working" and the ways to fix those problems vary tremendously. I always recommend stepping through, in great detail, the process that produces the problem.
If your computer won't turn on, for example, you might describe the problem to tech support like this:
"I hit the power button on my computer and a green light comes on the front of my computer and on my monitor. Some text shows up on the screen for just a second and then the whole thing shuts off. The monitor stays on but all the lights on the front of my computer case turn off. If I power it on again, the same thing happens over and over."

Repeat the Details

Another way to avoid confusion when communicating is by repeating what the person you're talking to is saying.
For example, let's say tech support advises you to "Click on x, then click on y, then select z." You should repeat back "Okay, I clicked on x, then I clicked on y, then I selected z." This way, tech support is confident that you completed the steps as asked and you're confident that you fully understood what was asked of you.
Answering "Okay, I did that" doesn't confirm that you understood each other. Repeating the details will help avoid a lot of confusion, especially if there's a language barrier.

Don't Get Emotional

No one likes computer problems. They even frustrate me. Getting emotional, however, solves absolutely nothing. All getting emotional does is lengthen the amount of time you have to talk to tech support which will frustrate you even more.
Try to keep in mind that the person you're talking to on the phone didn't design the hardware or program the software that's giving you problems. He or she has been hired to help solve your problem based on the information given to them by the company and from you.
You're only in control of the information you're providing so your best bet is to take another look at some of the tips above and try to communicate as clearly as you possibly can.

VII. Electricity and Power Supply
A power supply is a device that supplies electric power to an electrical load. The term is most commonly applied to electric power converters that convert one form of electrical energy to another, though it may also refer to devices that convert another form of energy (mechanical, chemical, solar) to electrical energy. A regulated power supply is one that controls the output voltage or current to a specific value; the controlled value is held nearly constant despite variations in either load current or the voltage supplied by the power supply's energy source.
Every power supply must obtain the energy it supplies to its load, as well as any energy it consumes while performing that task, from an energy source. Depending on its design, a power supply may obtain energy from:
·         Electrical energy transmission systems. Common examples of this include power supplies that convert AC line voltage to DC voltage.
·         Energy storage devices such as batteries and fuel cells.
·         Electromechanical systems such as generators and alternators.
·         Solar power.
A power supply may be implemented as a discrete, stand-alone device or as an integral device that is hardwired to its load. Examples of the latter case include the low voltage DC power supplies that are part of desktop computers and consumer electronics devices.
Commonly specified power supply attributes include:
·         The amount of voltage and current it can supply to its load.
·         How stable its output voltage or current is under varying line and load conditions.
·         How long it can supply energy without refueling or recharging (applies to power supplies that employ portable energy sources).

Types of Power Supply
Power supplies for electronic devices can be broadly divided into line-frequency (or "conventional") and switching power supplies. The line-frequency supply is usually a relatively simple design, but it becomes increasingly bulky and heavy for high-current equipment due to the need for large mains-frequency transformers and heat-sinked electronic regulation circuitry. Conventional line-frequency power supplies are sometimes called "linear," but that is a misnomer because the conversion from AC voltage to DC is inherently non-linear when the rectifiers feed into capacitive reservoirs. Linear voltage regulators produce regulated output voltage by means of an active voltage divider that consumes energy, thus making efficiency low. A switched-mode supply of the same rating as a line-frequency supply will be smaller, is usually more efficient, but would be more complex.
Battery

A battery is a device that converts stored chemical energy to electrical energy. Batteries are commonly used as energy sources in many household and industrial applications.
There are two types of batteries: primary batteries (disposable batteries), which are designed to be used once and discarded, and secondary batteries (rechargeable batteries), which are designed to be recharged and used multiple times. Batteries come in many sizes, from miniature cells used in hearing aids and wristwatches to room-size battery banks that serve as backup power supplies in telephone exchanges and computer data centers.
DC power supply

An AC powered unregulated power supply usually uses a transformer to convert the voltage from the wall outlet (mains) to a different, nowadays usually lower, voltage. If it is used to produce DC, a rectifier is used to convert alternating voltage to a pulsating direct voltage, followed by a filter, comprising one or more capacitors, resistors, and sometimes inductors, to filter out (smooth) most of the pulsation. A small remaining unwanted alternating voltage component at mains or twice mains power frequency (depending upon whether half- or full-wave rectification is used), known as ripple, is unavoidably superimposed on the direct output voltage.
For purposes such as charging batteries the ripple is not a problem, and the simplest unregulated mains-powered DC power supply circuit consists of a transformer driving a single diode in series with a resistor.
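A rough worked example of that ripple, assuming full-wave rectification (so the reservoir capacitor is topped up at twice the mains frequency) and illustrative component values:

# Rough peak-to-peak ripple for a capacitor-filtered rectifier:
# dV ~ I / (f_recharge * C). Illustrative values, not a design guide.
load_current = 1.0               # amperes drawn by the load
mains_freq = 60.0                # Hz
recharge_freq = 2 * mains_freq   # full-wave: capacitor refills twice per cycle
capacitance = 4700e-6            # farads (a 4700 uF reservoir capacitor)

ripple = load_current / (recharge_freq * capacitance)
print(f"peak-to-peak ripple is roughly {ripple:.2f} V")  # ~1.77 V
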

AC power supply
An AC power supply typically takes the voltage from a wall outlet (mains supply) and lowers it to the desired voltage. Some filtering may take place as well.
Linear regulated power supply

The voltage produced by an unregulated power supply will vary depending on the load and on variations in the AC supply voltage. For critical electronics applications, a linear regulator may be used to set the voltage to a precise value, stabilized against fluctuations in input voltage and load. The regulator also greatly reduces the ripple and noise in the output direct current. Linear regulators often provide current limiting, protecting the power supply and attached circuit from overcurrent.
Adjustable linear power supplies are common laboratory and service shop test equipment, allowing the output voltage to be adjusted over a range. For example, a bench power supply used by circuit designers may be adjustable up to 30 volts and up to 5 amperes output. Some can be driven by an external signal, for example, for applications requiring a pulsed output.
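The efficiency cost of linear regulation (mentioned under "Types of Power Supply" above) is easy to work out, since every volt the regulator drops is dissipated as heat. A sketch with illustrative bench-supply numbers:

# Why linear regulators run hot: every volt they drop is turned into
# heat. Illustrative bench-supply numbers.
v_in, v_out, i_load = 12.0, 5.0, 2.0    # volts in, volts out, amperes

p_load = v_out * i_load                 # power delivered to the load: 10 W
p_heat = (v_in - v_out) * i_load        # power wasted as heat: 14 W
efficiency = p_load / (v_in * i_load)   # about 42%

print(f"load {p_load} W, heat {p_heat} W, efficiency {efficiency:.0%}")
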
AC/DC supply
Main article: AC/DC (electricity)
In the past, mains electricity was supplied as DC in some regions, AC in others. Transformers cannot be used for DC, but a simple, cheap unregulated power supply could run directly from either AC or DC mains without using a transformer. The power supply consisted of a rectifier and a filter capacitor. When operating from DC, the rectifier was essentially a conductor, having no effect; it was included to allow operation from AC or DC without modification.
Switched-mode power supply

In a switched-mode power supply (SMPS), the AC mains input is directly rectified and then filtered to obtain a DC voltage. The resulting DC voltage is then switched on and off at a high frequency by electronic switching circuitry, thus producing an AC current that will pass through a high-frequency transformer or inductor. Switching occurs at a very high frequency (typically 10 kHz to 1 MHz), thereby enabling the use of transformers and filter capacitors that are much smaller, lighter, and less expensive than those found in linear power supplies operating at mains frequency. After the inductor or transformer secondary, the high frequency AC is rectified and filtered to produce the DC output voltage. If the SMPS uses an adequately insulated high-frequency transformer, the output will be electrically isolated from the mains; this feature is often essential for safety.
Switched-mode power supplies are usually regulated, and to keep the output voltage constant, the power supply employs a feedback controller that monitors current drawn by the load. The switching duty cycle increases as power output requirements increase.
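A toy model of that feedback loop, treating the converter as an ideal buck stage whose output is simply the duty cycle times the input voltage; the gain and iteration count are arbitrary illustrative choices, not how any real controller is tuned:

# Toy model of SMPS regulation: a feedback loop nudges the switching
# duty cycle until the output matches the setpoint.
v_in, v_target = 48.0, 12.0
duty = 0.5                           # initial duty-cycle guess

for _ in range(50):
    v_out = duty * v_in              # idealized converter output
    error = v_target - v_out
    duty += 0.01 * error             # proportional correction
    duty = min(max(duty, 0.0), 1.0)  # a duty cycle stays within 0..1

print(f"duty settles near {duty:.3f}, output near {duty * v_in:.2f} V")
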
SMPSs often include safety features such as current limiting or a crowbar circuit to help protect the device and the user from harm.[1] In the event that an abnormally high current draw is detected, the switched-mode supply can assume this is a direct short and will shut itself down before damage is done. PC power supplies often provide a power good signal to the motherboard; the absence of this signal prevents operation when abnormal supply voltages are present.
SMPSs have an absolute limit on their minimum current output.[2] They are only able to output above a certain power level and cannot function below that point. In a no-load condition the frequency of the power slicing circuit increases to great speed, causing the isolated transformer to act as a Tesla coil, causing damage due to the resulting very high voltage power spikes. Switched-mode supplies with protection circuits may briefly turn on but then shut down when no load has been detected. A very small low-power dummy load such as a ceramic power resistor or 10-watt light bulb can be attached to the supply to allow it to run with no primary load attached.
Power factor has become an issue of concern for computer manufacturers. Switched mode power supplies have traditionally been a source of power line harmonics and have a very poor power factor. The rectifier input stage distorts the waveshape of current drawn from the supply; this can produce adverse effects on other loads. The distorted current causes extra heating in the wires and distribution equipment. Switched mode power supplies in a building can result in poor power quality for other utility customers. Customers may face higher electric bills for a low power factor load.
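Power factor is simply real power divided by apparent power, so a rough check needs only a wattmeter reading plus the RMS voltage and current. A small worked example with assumed figures:

    # Power factor = real power / apparent power (all values assumed for illustration).
    real_power_w = 300.0                 # watts actually consumed by the supply
    voltage_v, current_a = 230.0, 2.2    # RMS mains voltage and RMS current drawn

    apparent_power_va = voltage_v * current_a
    power_factor = real_power_w / apparent_power_va
    print(f"PF = {power_factor:.2f}")    # ~0.59, the kind of poor figure described above

Modern PC supplies add active power factor correction (PFC) stages to push this figure close to 1.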

Programmable power supply

Programmable power supplies allow for remote control of the output voltage through an analog input signal or a computer interface such as RS232 or GPIB. Variable properties include voltage, current, and frequency (for AC output units). These supplies are composed of a processor, voltage/current programming circuits, current shunt, and voltage/current read-back circuits. Additional features can include overcurrent, overvoltage, and short circuit protection, and temperature compensation. Programmable power supplies also come in a variety of forms including modular, board-mounted, wall-mounted, floor-mounted or bench top.
Programmable power supplies can furnish DC, AC, or AC with a DC offset. The AC output can be either single-phase or three-phase. Single-phase is generally used for low-voltage, while three-phase is more common for high-voltage power supplies.
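As a loose illustration of the remote-control idea (not any specific product's protocol), a bench supply that accepts SCPI-style text commands over RS232 might be driven as below; the port name and the command strings are assumptions that vary from instrument to instrument:

    # Hypothetical remote control of a programmable supply over RS232.
    # Needs the third-party pyserial package; command syntax is instrument-specific.
    import serial

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:   # assumed port name
        port.write(b"VOLT 12.0\n")   # set the output voltage (assumed command)
        port.write(b"CURR 1.5\n")    # set the current limit (assumed command)
        port.write(b"OUTP ON\n")     # enable the output (assumed command)

The same pattern applies over GPIB or a computer network; only the transport and the command set change.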

Uninterruptible power supply
An uninterruptible power supply (UPS) takes its power from two or more sources simultaneously. It is usually powered directly from the AC mains, while simultaneously charging a storage battery. Should there be a dropout or failure of the mains, the battery instantly takes over so that the load never experiences an interruption. In a computer installation, this gives the operators time to shut down the system in an orderly way. Other UPS schemes may use an internal combustion engine or turbine to continuously supply power to a system in parallel with power coming from the AC. The engine-driven generators would normally be idling, but could come to full power in a matter of a few seconds in order to keep vital equipment running without interruption. Such a scheme might be found in hospitals or telephone central offices.
High-voltage power supply
High voltage refers to an output on the order of hundreds or thousands of volts. High-voltage supplies use a linear setup to produce an output voltage in this range.
Additional features available on high-voltage supplies can include the ability to reverse the output polarity along with the use of circuit breakers and special connectors intended to minimize arcing and accidental contact with human hands. Some supplies provide analog inputs that can be used to control the output voltage, effectively turning them into high-voltage amplifiers albeit with very limited bandwidth.
Voltage multipliers
A voltage multiplier is an electrical circuit that converts AC electrical power from a lower voltage to a higher DC voltage, typically by means of a network of capacitors and diodes. The input voltage may be doubled (voltage doubler), tripled (voltage tripler), quadrupled (voltage quadrupler), and so on. These circuits allow high voltages to be obtained using a much lower voltage AC source.
Typically, voltage multipliers are composed of half-wave rectifiers, capacitors, and diodes. For example, a voltage tripler consists of three half-wave rectifiers, three capacitors, and three diodes (as in the Cockcroft-Walton multiplier). Full-wave rectifiers may be used in a different configuration to achieve even higher voltages. Also, both parallel and series configurations are available. For parallel multipliers, a higher voltage rating is required at each consecutive multiplication stage, but less capacitance is required. The voltage rating of the capacitors determines the maximum output voltage.
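For the Cockcroft-Walton arrangement, each stage ideally adds twice the AC peak voltage to the output, so the no-load output is easy to estimate. A minimal sketch of that estimate (real multipliers sag considerably under load):

    # Ideal (no-load) output of an n-stage Cockcroft-Walton voltage multiplier.
    import math

    def cw_output_volts(v_rms, stages):
        v_peak = v_rms * math.sqrt(2)   # convert the RMS input to its peak value
        return 2 * stages * v_peak      # each stage contributes 2 x Vpeak

    print(round(cw_output_volts(230, 3)))   # 3 stages on 230 V RMS -> about 1950 V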



VIII.  Parts of the Computer

·         Motherboard Parts and Functions

Motherboard parts
1 - Firewire header
FireWire is also known as IEEE 1394. It is a high-performance serial bus that lets digital video and audio equipment exchange data. The technology preceded USB, yet was faster in practice than the USB ports of its era. It is often used for transferring digital video to the PC straight from a digital camera. The FireWire header on board means you can install a FireWire port on your machine. Again, these cables are often supplied as an optional extra, so check with the retailer to see whether they are included with your board.
2 - PCI Express 16x slots
Now the most common slot for graphics cards, the PCI Express 16x slot provides 16 separate lanes of data transfer. PCI Express 1.0 slots offer a data transfer rate of 250MB/s per lane; the second generation (PCI Express 2.0) offers twice that at 500MB/s, and PCI Express 3.0, currently in development, offers 1GB/s per lane. PCI Express 16x slots are also the basis for both SLI and CrossFire multi-graphics-card setups. With the increasing demands graphics cards place on systems, nothing less than a 16-lane slot will do for a modern graphics card.
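The slot's total bandwidth is just the per-lane rate multiplied by the lane count (per direction). A quick sketch using the per-lane figures quoted above:

    # Aggregate PCI Express slot bandwidth = per-lane rate x number of lanes.
    per_lane_mb_s = {"1.0": 250, "2.0": 500, "3.0": 1000}   # MB/s, as quoted above

    def slot_bandwidth_mb_s(generation, lanes):
        return per_lane_mb_s[generation] * lanes

    print(slot_bandwidth_mb_s("1.0", 16))   # 4000 MB/s for a PCIe 1.0 x16 slot
    print(slot_bandwidth_mb_s("2.0", 16))   # 8000 MB/s for a PCIe 2.0 x16 slot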
3 - PCI Express 1x Slot
Like the PCI Express 16x above, the 1x slot uses exactly the same system but has only a single lane of serial data transfer. These slots are used for expansion cards that do not require the same amount of data transfer that a graphics card requires. You will usually find components such as TV tuners, network cards and sound cards making use of the PCI Express 1x slot. You will also notice the difference in size between the 1x and the 16x slots: the PCI Express 1x slot is noticeably smaller and easy to spot.
4 - Chipset - North Bridge (with heatsink)
The motherboard's chipset can be described as what sets it apart from other boards in its category. Different chipsets contain different features and components. A chipset is a number of integrated circuits built onto the board to provide specific functions, e.g. one part of the chipset may be an onboard component such as a modem or sound chip, while other parts may be used to control the CPU functions. Most chipsets are designed to work with only one "class" of CPU, although many chipsets support more than one type of CPU, such as socket 7, which supports the Pentium, Cyrix 686, Cyrix MII, AMD K6 and K6-2. There are certain restrictions on what type of processor a chipset can handle, because of the logic the CPU uses to access the memory and its cache. Since these chips work harder with each generation, motherboard manufacturers have started to put heatsinks and active coolers (fans) on the main parts of the chipset to disperse some of the heat.
5 and 8 - ATX Power connector
The standard ATX power connector; the cable for this comes from the PSU, and a clip is normally provided to make sure you fit it the correct way round. As a tip, don't push too hard if it's stuck; check that it is oriented the correct way. I have seen plenty of power connectors where the pins have been pushed out of the connector, and these can be difficult to get back into place, so it's best to be careful.
6 - CPU (Central Processing Unit) socket
All the CPU "sockets look very similar, however they are different in the way they have different amount of pins and in different layouts. There are currently two major CPU socket types PGA and LGA. PGA or Pin Grad Array uses a system of pins on the CPU and holes on the socket to line up and hold a CPU in place. The introduction of the ZIF (Zero Insertion Force) socket for PGA types allowed the CPU's to be lined up without any pressure on the CPU until a level is pulled down. LGA or Land Grid Array uses a system of gold plated copper pads that make contact with the motherboard. It is very important to read your motherboard manual to discover what types of CPU's you motherboard supports as most motherboards are aimed at a specific type of CPU.
7 - DIMM (Dual Inline Memory Module) slots
DIMMs are by far the most common memory modules in today's computers. They vary in speed and standard, however, and they need to match what your motherboard has been designed to take. The four standards of DIMMs in use at the moment are SDR (Single Data Rate), DDR (Double Data Rate), DDR2 and DDR3. Memory speeds vary between 66MHz and 1600MHz.
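A module's peak transfer rate follows directly from its effective clock: standard DIMMs have a 64-bit (8-byte) data bus, so peak bandwidth is the effective transfer rate multiplied by eight bytes. A small sketch of that rule of thumb:

    # Peak DIMM bandwidth = effective transfers per second x bus width.
    def peak_bandwidth_mb_s(effective_mt_s, bus_width_bytes=8):
        """Standard DIMMs move 8 bytes (64 bits) per transfer."""
        return effective_mt_s * bus_width_bytes

    print(peak_bandwidth_mb_s(1600))   # DDR3-1600 -> 12800 MB/s peak per module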

9 - Motherboard controls
Not available on all motherboards, but some allow direct control of the motherboard via simple buttons. The power switch, error checking, CMOS clearing, passwords and more can be accessed directly on the board on some models.
10 - Chipset - South Bridge
When we talk about chipsets, you mostly hear about the north bridge; even those into PC technology have a hard time naming south bridges without looking them up. Names like nForce 2 and KT600 are north bridges. The south bridge does an important job as well: it handles things like the PCI bus, the onboard network and sound chips, and the IDE and S-ATA buses.
11 - Serial ATA Connector
Serial ATA, more commonly seen as S-ATA, is a newer way of connecting your hard drives to your PC. S-ATA drives are capable of being faster than their IDE counterparts and also have smaller, thinner cables, which help with the airflow of the system. S-ATA hard disks are fast becoming the norm for hard drive technology. Current motherboards feature both IDE and S-ATA connectors to accommodate all types of storage hardware.
12 - USB 2.0 header
As well as having USB ports on the rear of the motherboard, motherboard manufacturers often add a couple of USB headers so you can connect optional cables for extra USB ports. These cables are often supplied, and you only need to add them if you need the extra connectivity. USB 2.0 replaced USB 1.1 as a much faster solution; it is backwards compatible, meaning all USB 1.1 devices will work in the newer USB 2.0 ports.
13 - Motherboard Battery
The battery gives the board a small amount of power so it can retain some vital data when the machine is switched off, such as the time and date, so you don't have to reset them every time you boot up. Motherboard batteries are usually long-lasting lithium cells. Removing the battery resets this data, including the BIOS settings; however, not refitting it correctly can lead to irreparable damage to the motherboard. Only remove the battery if it is dead, or if you have no other way of resetting the data on your machine, such as a clear-CMOS jumper.
14 - PCI (Peripheral Component Interconnect) slot
The PCI bus (not PCI Express) is now an older technology, and although PCI slots are still available, they have decreased in number and are being replaced by PCI Express 1x slots. It's unlikely that you will get a motherboard without a PCI slot at the moment, because a lot of components still use the standard PCI slot. It would be awkward to upgrade to a system without PCI slots, as it may mean upgrading more components than you would like to.
15 - Floppy Drive Connector
Simpler than the IDE connector: you only have to remember to match the red line on the cable to pin 1 of the connector and to pin 1 on the floppy drive. This port is used only with floppy drives. You may not have a floppy controller on your motherboard, as it is slowly being phased out now that more people use writable CDs and DVDs to transfer data, to store data and to use as boot-up discs.
16 - IDE connector - Not on Diagram
The connector into which you insert an IDE cable (supplied with the motherboard). IDE cables connect devices such as hard disks, CD drives and DVD drives. The current four standards of IDE devices are ATA 33, 66, 100 and 133; the numbers specify the data rate in MB/s in a maximum-burst situation. In reality there is little chance of sustaining a data rate of this magnitude. Both the connectors and devices are backwards compatible with each other; however, they will only run at the slowest rated speed between them. All IDE cables come with a red line down one side, which shows which way the cable should be plugged in: the red line should always connect to pin one of the IDE port. Checking your motherboard documentation should show you which end is pin one; in some cases it will be written on the board itself.
In the case of ATA 66/100/133 there is a certain order in which you plug devices in; the cable is colour-coded to help you get them in the correct order.
§  The Blue connector should be connected to the system board
§  The Black connector should be connected to the master device
§  The Grey Connector should be connected to the slave device
17 - BIOS (Basic Input Output System) Chip - Not on Diagram
The BIOS holds the most important data for your machine; if configured incorrectly it could cause your computer to boot incorrectly or not at all. The BIOS also informs the PC what the motherboard supports in terms of CPU etc. This is why, when a new CPU is introduced that physically fits into a slot or socket, you may need a BIOS update to support it. The main reason for this is that different CPUs use different logic and methods, so the BIOS has to understand certain instructions from the CPU to recognize it.

·         Hard Disk Drive Parts and Functions

What is a hard disk drive?
A hard disk drive (often shortened as hard disk, hard drive, or HDD) is a non-volatile storage device that stores digitally encoded data on rapidly rotating rigid (i.e. hard) platters with magnetic surfaces. Strictly speaking, "drive" refers to the motorized mechanical aspect that is distinct from its medium, such as a tape drive and its tape, or a floppy disk drive and its floppy disk. Early HDDs had removable media; however, an HDD today is typically a sealed unit (except for a filtered vent hole to equalize air pressure) with fixed media.
How does a hard drive work?

A hard disk is a sealed unit containing a number of platters in a stack. Hard disks may be mounted in a horizontal or a vertical position. In this description, the hard drive is mounted horizontally.
Electromagnetic read/write heads are positioned above and below each platter. As the platters spin, the drive heads move in toward the center surface and out toward the edge. In this way, the drive heads can reach the entire surface of each platter.

Hard drive physical components

PLATTERS: 
A platter is a circular metal disk mounted inside a hard disk drive. Several platters are mounted on a fixed spindle motor to create more data storage surfaces in a smaller area. The platter has a core made of an aluminium or glass substrate, covered with a thin layer of ferric oxide or cobalt alloy. On both sides of the substrate material, a thin coating is deposited by a special manufacturing technique. This thin coating, where the actual data is stored, is the media layer.


When the magnetic media is applied to the surface of the substrate material, a thin lubricating layer is applied to protect the material. This complex three-layered media is discussed in detail as follows:

THE SUBSTRATE MATERIAL: 
The bulk material of which platters are made forms the base on which the media layer is deposited. The substrate has no specific function except to support the media layer. The most commonly used material for this physical layer is an aluminium alloy, which is rigid, lightweight, stable, inexpensive, easy to work with and readily available. Earlier, since the gap between the heads and the platter was relatively large, the smoothness and flatness of the platter surface was less of an issue. However, as technology advances, the gap between heads and platters is decreasing and the speed at which the platters spin is increasing, so demand for alternative platter materials is growing. Glass platters are replacing aluminium platters because they provide improved rigidity, better quality, thinner platters, and thermal stability.

MEDIA LAYER:
The substrate material forms the base upon which actual recording media is deposited. The media layer is a thin coating of magnetic material applied to the surface of the platters and where the actual data is stored. Its thickness is only a few millionths of an inch.
Special techniques are employed for the deposition of magnetic material on the substrate. A thin coating is deposited on both sides of the substrate, mostly by a vacuum deposition process called magnetron sputtering. Another method is electroplating, using a process similar to that used in electroplating jewelry.

PROTECTIVE LAYER:
On top of the magnetic media is applied a super-thin, protective, lubricating layer. This layer is called the protective layer because it protects the disk from damage caused by accidental contact with the heads (a "head crash") and from foreign material entering the drive.

PLATTER DIVISIONS: 
To maintain organized storage and retrieval of data, the platters are organized into specific structures. These structures include tracks, sectors, and clusters.

TRACKS: 
Each platter is divided into thousands of tightly packed concentric circles known as tracks, which resemble the annual rings of a tree. All the information stored on the hard disk is recorded in tracks. Starting from zero at the outer edge of the platter, the track numbers increase toward the inner edge. Each track can hold a large amount of data, amounting to thousands of bytes.

SECTORS: 
Each track is further broken down into smaller units called sectors. A sector is the basic unit of data storage on a hard disk. A single track typically has thousands of sectors, and each sector typically holds 512 bytes of data; a few additional bytes are used for control structures and error detection and correction.

CLUSTERS: 
Sectors are often grouped together to form Clusters.
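Together these divisions give the classic cylinder/head/sector (CHS) arithmetic: capacity is simply tracks x heads x sectors per track x bytes per sector. A small sketch using the well-known legacy CHS addressing limit:

    # Classic CHS capacity arithmetic for a hard disk.
    cylinders, heads, sectors_per_track = 16383, 16, 63   # legacy CHS addressing limit
    bytes_per_sector = 512

    disk_bytes = cylinders * heads * sectors_per_track * bytes_per_sector
    cluster_bytes = 8 * bytes_per_sector   # e.g. 8 sectors per cluster = 4 KB clusters

    print(f"{disk_bytes / 10**9:.1f} GB")  # ~8.5 GB, the old CHS addressing ceiling

Modern drives use logical block addressing (LBA) instead, but the track and sector vocabulary survives.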

READ/WRITE HEADS: 
The heads are an interface between the magnetic media where the data is stored and the electronic components in the hard disk. The heads convert the information from bits into magnetic pulses when writing to the platter, and reverse the process when reading.


The heads are the most sophisticated part of the hard disk. Each platter has two read/write heads, one mounted on the top and the other on the bottom. These heads are mounted on head sliders, which are suspended at the ends of head arms. The head arms are all fused into a single structure called the actuator, which is responsible for their movement.

THE SPINDLE MOTOR: 

The spindle motor plays an important role in hard drive operation by turning the hard disk platters. A spindle motor must provide stable, reliable, and consistent turning power for many hours of continuous use. Many hard drive failures occur because the spindle motor is not functioning properly.

HARD DISK LOGIC BOARD:
The hard disk includes an intelligent circuit board integrated into the unit. It is mounted on the bottom of the base casting, exposed on the outer side. The read/write heads are linked to the logic board through a flexible ribbon cable.

DRIVE BAY: 
The entire hard disk is mounted in an enclosure designed to protect it from the outside air. It is necessary to keep the internal environment of the hard disk free of dust and other contaminants. These contaminants may accumulate in the gap between the read/write heads and the platters, which usually leads to head crashes.

The bottom of the disk is also called the base casting. The drive mechanics are placed in the base casting, and a cover, usually made of aluminium, is placed on top to enclose the heads and platters. The contents of the base and cover chamber are collectively known as the head-disk assembly. Once this assembly is opened, the contents are instantly contaminated, which will eventually ruin the drive.
On the bottom of the base casting sits the logic board, which is separated from the casting by a cushioning material.


·         Floppy Disk Drive Parts and Functions

The Disk

A floppy disk is a lot like a cassette tape:
·         Both use a thin plastic base material coated with iron oxide. This oxide is a ferromagnetic material, meaning that if you expose it to a magnetic field it is permanently magnetized by the field.
·         Both can record information instantly.
·         Both can be erased and reused many times.
·         Both are very inexpensive and easy to use.
If you have ever used an audio cassette, you know that it has one big disadvantage -- it is a sequential device. The tape has a beginning and an end, and to move the tape to another song later in the sequence of songs on the tape you have to use the fast forward and rewind buttons to find the start of the song, since the tape heads are stationary. For a long audio cassette tape it can take a minute or two to rewind the whole tape, making it hard to find a song in the middle of the tape.
A floppy disk, like a cassette tape, is made from a thin piece of plastic coated with a magnetic material on both sides. However, it is shaped like a disk rather than a long thin ribbon. The tracks are arranged in concentric rings so that the software can jump from "file 1" to "file 19" without having to fast forward through files 2-18. The diskette spins like a record and the heads move to the correct track, providing what is known as direct access storage.

The Drive

The major parts of a FDD include:
·         Read/Write Heads: Located on both sides of a diskette, they move together on the same assembly. The heads are not directly opposite each other, in an effort to prevent interaction between write operations on each of the two media surfaces. The same head is used for reading and writing, while a second, wider head is used for erasing a track just prior to it being written. This allows the data to be written on a wider "clean slate," without interfering with the analog data on an adjacent track.
·         Drive Motor: A very small spindle motor engages the metal hub at the center of the diskette, spinning it at either 300 or 360 rotations per minute (RPM).
·         Stepper Motor: This motor makes a precise number of stepped revolutions to move the read/write head assembly to the proper track position. The read/write head assembly is fastened to the stepper motor shaft.
·         Mechanical Frame: A system of levers that opens the little protective window on the diskette to allow the read/write heads to touch the dual-sided diskette media. An external button allows the diskette to be ejected, at which point the spring-loaded protective window on the diskette closes.
·         Circuit Board: Contains all of the electronics to handle the data read from or written to the diskette. It also controls the stepper-motor control circuits used to move the read/write heads to each track, as well as the movement of the read/write heads toward the diskette surface.
The read/write heads do not touch the diskette media when the heads are traveling between tracks. Electronic optics check for the presence of an opening in the lower corner of a 3.5-inch diskette (or a notch in the side of a 5.25-inch diskette) to see if the user wants to prevent data from being written on it.

FLOPPY DISK DRIVE TERMINOLOGY

1. Floppy disk - Also called a diskette. The common size is 3.5 inches.
2. Floppy disk drive - The electromechanical device that reads and writes floppy disks.
3. Track - A concentric ring of data on a side of a disk.
4. Sector - A subset of a track, similar to a wedge or a slice of pie.


·         USB Flash Drive Parts and Functions

USB Connector

·         The USB connector is the small, silver extension that protrudes from the main body of the drive. The connector is what inserts into the USB port of the computer. Because this part of the USB flash drive is easily damaged, newer USB flash drives come with a mechanism that retracts the connector into the main compartment. This avoids the problem of the connector being damaged, accumulating dust or being crushed. The USB connector inserts into any USB port on a computer, or the user can insert it into a hub for devices on a machine.

Memory Chip

·         The flash memory chip, a black chip placed on the main circuit board of the USB drive, is what stores the information. This chip is protected by an outer case. These cases are sometimes clear, so the user can see the flash memory chip on the USB drive's circuit board. The flash memory chip contains different amounts of memory, depending on the device purchased. Flash drives started with a capacity of a few megabytes, but they continue to increase in capacity, and drives can now hold gigabytes of information on their flash memory chips.

LED and Crystal Oscillator

·         USB flash drives contain light-emitting diodes (LEDs). LEDs are lighted components that indicate processing or connection for the user. A green LED is used to mean "Ready." It indicates that the USB drive is connected and ready to save information. Some flash drives have a red LED light to indicate that an issue exists with the connection. A crystal oscillator is a small component on the flash drive's circuit board that sends a frequency signal. These oscillators are used in watches and other devices that keep time. Oscillators are used to control the output of the flash drive.

·         Compact Disk Parts and Functions
The individual parts of a compact disc provide unique graphic design challenges and opportunities for desktop publishers and designers. In this article we dissect a compact disc and analyze its manufactured anatomy, explaining how the different parts will affect your compact disc design. Knowing the medium you are designing for helps prevent unwelcome surprises in the final product.
Main printable area
The main section of the disc: This is where the audio or data is encoded. Colors printed on this surface will tend to appear darker than they would on white paper. Depending on the ink coverage, differing amounts of the silver surface will show through. Higher ink coverage (darker colors, in general) means you'll see less of the reflective surface showing through. Less ink coverage, with print dots more spaced apart (lighter colors, in general), will reveal more of the underlying disc surface. The only way to have something appear white anywhere on the compact disc surface is to print with white ink (see "white base coat" below).
Mirror band
This is the ring area just inside of the main print area. The mirror band is not encoded with data so it has a different reflective quality, appearing darker than any other part of the compact disc. Generally the mirror band is etched with the name of the manufacturer, as well as a number or barcode identification associated with the client audio master. The effect of printing on the mirror band is a darkening of the text or images as compared to that of the main print area. Just inside of the mirror band is the stacking ring.
Stacking ring
On the underside of each disc, this thin ring of raised plastic is used to keep a small amount of space between each disc when stacked up for boxing and/or shipping. It prevents the flat surfaces from scraping against each other, which could scratch either the printed tops or the readable bottoms of the discs. Even though it is on the underside, some manufacturers are unable to print over the stacking ring area due to a small "trough" created on the top surface when they mold their discs. Other manufacturers mold compact discs that are smooth on the top and have no problem printing over the stacking ring area.

Hub
This is the innermost portion of the disc, made of clear plastic, and includes the stacking ring. Printing over the hub area is similar to the effect of printing on transparency media. The lighter the color, the more the transparency effect is present, due to the small, widely spaced print dots that are used to produce light colors. With heavy ink coverage over the hub, the transparency is far less noticeable. However, all colors will appear different when printed over the clear plastic hub as compared to the other opaque surfaces of the compact disc.
A Basic Solution to the Inconsistencies
Applying a white base coat over the disc's entire print area before printing the design lessens the darkening effect of the mirror band, and also lessens the transparency effect of the plastic hub. The white base (sometimes termed "white flood") acts like a primer coat, so the final design more closely resembles printing on the white paper of standard jewel case inserts, wallets, posters etc. If your CD design includes photos, particularly faces, a white flood will make them look more natural. It can also help to match colors used on the printed inserts. Most manufacturers will not automatically suggest a white flood, and they may charge for it as they would any other ink, but it can make a big difference in the appearance of your designed disc.


·         Digital Versatile Disc
A digital versatile disc or digital video disc (DVD) is a small plastic disc used for the storage of digital data. The successor medium to the compact disc (CD), a DVD can have more than 100 times the storage capacity of a CD. Compared with CD technology, DVD also allows for better graphics and greater resolution. In the case of an audio recording, where the data to be stored is in analog rather than digital form, the sound signal is sampled at a rate of 48,000 or 96,000 times a second, then each sample is measured and digitally encoded on the 4 3/4-in. (12-cm) disc as a series of microscopic pits on an otherwise polished surface. The disc is covered with a protective, transparent coating so that it can be read by a laser beam. As with other optical discs, nothing touches the encoded portion, and the DVD is not worn out by the playing process. Because DVD players are backward compatible with existing technologies, they can play CD and CD-ROM discs; however, CD players cannot play DVD and DVD-ROM discs.
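The sampling figures above translate directly into a raw data rate: samples per second times bits per sample times channels. A quick sketch, where the 16-bit stereo format is an assumption for illustration:

    # Raw PCM data rate for the sampling rates quoted above.
    def pcm_bit_rate(sample_rate_hz, bits_per_sample=16, channels=2):
        """Bits per second of uncompressed PCM audio (format assumed)."""
        return sample_rate_hz * bits_per_sample * channels

    print(pcm_bit_rate(48_000))   # 1,536,000 bit/s for 16-bit stereo at 48 kHz
    print(pcm_bit_rate(96_000))   # 3,072,000 bit/s at 96 kHz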

·         Blu-ray Disc
Blu-ray Disc (BD) is an optical disc storage medium designed to supersede the DVD format. The plastic disc is 120 mm in diameter and 1.2 mm thick, the same size as DVDs and CDs. Conventional (pre-BD-XL) Blu-ray Discs contain 25 GB per layer, with dual-layer discs (50 GB) being the industry standard for feature-length video discs. Triple-layer discs (100 GB) and quadruple-layer discs (128 GB) are available for BD-XL re-writer drives.[3] The name Blu-ray Disc refers to the blue laser used to read the disc, which allows information to be stored at a greater density than is possible with the longer-wavelength red laser used for DVDs. The major application of Blu-ray Discs is as a medium for video material such as feature films. Besides the hardware specifications, Blu-ray Disc is associated with a set of multimedia formats. Generally, these formats allow for the video and audio to be stored with greater definition than on DVD.

IX. CMOS Manipulation

Manipulation of biological cells using a CMOS/microfluidic hybrid system has been demonstrated. The hybrid system starts with a custom-designed CMOS (complementary metal-oxide semiconductor) chip fabricated in a semiconductor foundry. A microfluidic channel is post-fabricated on top of the CMOS chip to provide a biocompatible environment. The motion of individual biological cells that are tagged with magnetic beads is directly controlled by the CMOS chip, which generates microscopic magnetic field patterns using an on-chip array of micro-electromagnets. Furthermore, the CMOS chip allows high-speed and programmable reconfiguration of the magnetic fields, substantially increasing the manipulation capability of the hybrid system. Extending previous work that verified the concept of the hybrid system, a set of manipulation experiments with biological cells further confirms the advantage of the hybrid approach. To enhance the biocompatibility of the system, the microfluidic channel is redesigned and the temperature of the device is monitored by on-chip sensors. Combining microelectronics and microfluidics, the CMOS/microfluidic hybrid system presents a new model for a cell manipulation platform in biological and biomedical applications.


X. Beep Codes and Error Messages

During POST, the BIOS indicates the current testing phase by writing a hex code to I/O location 80h. If errors are encountered, either error beep codes or error messages are produced.
§  If an error occurs prior to video initialization, it is reported through a series of audio beep codes.
§  If an error occurs after video initialization, the error is displayed on the video screen.
Refer to the following tables for more information on BIOS error beep codes and error messages.
BIOS generated POST error beep codes (prior to video initialization)


POST error messages (after video initialization)

In the following table, the response section is divided into three types:
·         Warning: The message is displayed on the screen and the error is logged to the SEL. The system continues booting with a degraded state.
·         Pause: The message is displayed on the screen and the boot process is paused until the appropriate input is given to either continue the boot process or to take corrective action.
·         Halt: The message is displayed on the screen, an error is logged to the SEL, and the system cannot boot unless the error is corrected.
XI. Network Cabling

Cable is the medium through which information usually moves from one network device to another. There are several types of cable which are commonly used with LANs. In some cases, a network will utilize only one type of cable, while other networks will use a variety of cable types. The type of cable chosen for a network is related to the network's topology, protocol, and size. Understanding the characteristics of different types of cable, and how they relate to other aspects of a network, is necessary for the development of a successful network.
The following sections discuss the types of cables used in networks and other related topics.
  • Unshielded Twisted Pair (UTP) Cable
  • Shielded Twisted Pair (STP) Cable
  • Coaxial Cable
  • Fiber Optic Cable
  • Cable Installation Guides
  • Wireless LANs
Unshielded Twisted Pair (UTP) Cable
Twisted pair cabling comes in two varieties: shielded and unshielded. Unshielded twisted pair (UTP) is the most popular and is generally the best option for school networks (See fig. 1).

The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The cable has four pairs of wires inside the jacket. Each pair is twisted with a different number of twists per inch to help eliminate interference from adjacent pairs and other electrical devices. The tighter the twisting, the higher the supported transmission rate and the greater the cost per foot. The EIA/TIA (Electronic Industry Association/Telecommunication Industry Association) has established standards for UTP and rated six categories of wire (additional categories are emerging).

Categories of Unshielded Twisted Pair

Category    Speed                  Use
1           1 Mbps                 Voice Only (Telephone Wire)
2           4 Mbps                 LocalTalk & Telephone (Rarely used)
3           16 Mbps                10BaseT Ethernet
4           20 Mbps                Token Ring (Rarely used)
5           100 Mbps (2 pair)      100BaseT Ethernet
            1000 Mbps (4 pair)     Gigabit Ethernet
5e          1,000 Mbps             Gigabit Ethernet
6           10,000 Mbps            10 Gigabit Ethernet

Unshielded Twisted Pair Connector

The standard connector for unshielded twisted pair cabling is an RJ-45 connector. This is a plastic connector that looks like a large telephone-style connector (See fig. 2). A slot allows the RJ-45 to be inserted only one way. RJ stands for Registered Jack, implying that the connector follows a standard borrowed from the telephone industry. This standard designates which wire goes with each pin inside the connector.
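As an illustration of that pin-to-wire designation, the widely used T568B assignment pairs each of the eight RJ-45 pins with a specific conductor colour (this is the common Ethernet wiring convention, not something defined by this document):

    # T568B pin-to-wire assignment for an RJ-45 connector.
    T568B = {
        1: "white/orange", 2: "orange",
        3: "white/green",  4: "blue",
        5: "white/blue",   6: "green",
        7: "white/brown",  8: "brown",
    }
    for pin, colour in sorted(T568B.items()):
        print(f"pin {pin}: {colour}")

Wiring both ends to T568B gives a straight-through cable; wiring one end T568A and the other T568B gives a crossover cable.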

Shielded Twisted Pair (STP) Cable

Although UTP cable is the least expensive cable, it may be susceptible to radio and electrical frequency interference (it should not be too close to electric motors, fluorescent lights, etc.). If you must place cable in environments with lots of potential interference, or if you must place cable in extremely sensitive environments that may be susceptible to the electrical current in the UTP, shielded twisted pair may be the solution. Shielded cables can also help to extend the maximum distance of the cables.
Shielded twisted pair cable is available in three different configurations:
  1. Each pair of wires is individually shielded with foil.
  2. There is a foil or braid shield inside the jacket covering all wires (as a group).
  3. There is a shield around each individual pair, as well as around the entire group of wires (referred to as double shield twisted pair).

Coaxial Cable

Coaxial cabling has a single copper conductor at its center. A plastic layer provides insulation between the center conductor and a braided metal shield (See fig. 3). The metal shield helps to block any outside interference from fluorescent lights, motors, and other computers.

Although coaxial cabling is difficult to install, it is highly resistant to signal interference. In addition, it can support greater cable lengths between network devices than twisted pair cable. The two types of coaxial cabling are thick coaxial and thin coaxial.
Thin coaxial cable is also referred to as thinnet. 10Base2 refers to the specifications for thin coaxial cable carrying Ethernet signals. The 2 refers to the approximate maximum segment length being 200 meters. In actual fact the maximum segment length is 185 meters. Thin coaxial cable has been popular in school networks, especially linear bus networks.
Thick coaxial cable is also referred to as thicknet. 10Base5 refers to the specifications for thick coaxial cable carrying Ethernet signals. The 5 refers to the maximum segment length being 500 meters. Thick coaxial cable has an extra protective plastic cover that helps keep moisture away from the center conductor. This makes thick coaxial a great choice when running longer lengths in a linear bus network. One disadvantage of thick coaxial is that it does not bend easily and is difficult to install.
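The IEEE shorthand used here decodes as the speed in Mbps, "Base" for baseband signalling, and a rough segment-length or media hint. A small sketch capturing the two coax variants described above:

    # Decoding the 10Base2 / 10Base5 shorthand from the text above.
    coax_specs = {
        "10Base2": {"media": "thin coaxial (thinnet)", "max_segment_m": 185},
        "10Base5": {"media": "thick coaxial (thicknet)", "max_segment_m": 500},
    }
    for name, spec in coax_specs.items():
        print(f"{name}: {spec['media']}, up to {spec['max_segment_m']} m per segment")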

Coaxial Cable Connectors

The most common type of connector used with coaxial cables is the Bayonet Neill-Concelman (BNC) connector (See fig. 4). Different types of adapters are available for BNC connectors, including a T-connector, barrel connector, and terminator. Connectors on the cable are the weakest points in any network. To help avoid problems with your network, always use BNC connectors that crimp, rather than screw, onto the cable.

 

Fiber Optic Cable

Fiber optic cabling consists of a center glass core surrounded by several layers of protective materials (See fig. 5). It transmits light rather than electronic signals, eliminating the problem of electrical interference. This makes it ideal for certain environments that contain a large amount of electrical interference. It has also made it the standard for connecting networks between buildings, due to its immunity to the effects of moisture and lightning.
Fiber optic cable has the ability to transmit signals over much longer distances than coaxial and twisted pair. It also has the capability to carry information at vastly greater speeds. This capacity broadens communication possibilities to include services such as video conferencing and interactive services. The cost of fiber optic cabling is comparable to copper cabling; however, it is more difficult to install and modify. 10BaseF refers to the specifications for fiber optic cable carrying Ethernet signals.
The center core of fiber cables is made from glass or plastic fibers (see fig 5). A plastic coating then cushions the fiber center, and Kevlar fibers help to strengthen the cables and prevent breakage. The outer insulating jacket is made of Teflon or PVC.


There are two common types of fiber cables -- single mode and multimode. Multimode cable has a larger diameter; however, both cables provide high bandwidth at high speeds. Single mode can provide more distance, but it is more expensive.
Specification    Cable Type
10BaseT          Unshielded Twisted Pair
10Base2          Thin Coaxial
10Base5          Thick Coaxial
100BaseT         Unshielded Twisted Pair
100BaseFX        Fiber Optic
100BaseBX        Single mode Fiber
100BaseSX        Multimode Fiber
1000BaseT        Unshielded Twisted Pair
1000BaseFX       Fiber Optic
1000BaseBX       Single mode Fiber
1000BaseSX       Multimode Fiber


XII.  Computer Networking

A computer network, or simply a network, is a collection of computers and network hardware interconnected by communication channels that allow sharing of resources and information.[1] When a process in one device is able to exchange information with a process in another device, the two devices are said to be networked. Networks may be classified by the following characteristics: the media used to transmit signals, the communications protocols used to organize network traffic, network scale, network topology, benefits, and organizational scope.
Communication protocols define the rules and data formats for exchanging information in a computer network. Well-known communications protocols include Ethernet, a hardware and link layer standard that is widely used for local area networks, and the Internet protocol suite (TCP/IP), which defines a set of protocols for communication between multiple networks, for host-to-host data transfer, and for application-specific data transmission formats. Protocols provide the basis for network programming.
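Since protocols provide the basis for network programming, here is a minimal sketch of host-to-host transfer over TCP/IP using Python's standard socket module; the loopback address and port number are arbitrary choices for the example:

    # Minimal TCP echo exchange over the Internet protocol suite (loopback).
    import socket, threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 50007))   # arbitrary local address and port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))   # echo the client's bytes back
        srv.close()

    threading.Thread(target=serve).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 50007))
        cli.sendall(b"hello, network")
        print(cli.recv(1024))               # b'hello, network'

Ethernet handles the link layer underneath; the application above only ever sees the reliable byte stream that TCP provides.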
Computer networking can be considered a branch of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.

XIII. Network Topology
Network topology is the arrangement of the various elements (links, nodes, etc.) of a computer[1][2] or biological network.[3] Essentially, it is the topological[4] structure of a network, and may be depicted physically or logically. Physical topology refers to the placement of the network's various components, including device location and cable installation, while logical topology shows how data flows within a network, regardless of its physical design. Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ between two networks, yet their topologies may be identical.
A good example is a local area network (LAN): Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. Conversely, mapping the data flow between the components determines the logical topology of the network.
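One way to make that concrete is to represent the physical links as a graph; the node names below are invented for the example. In a star topology, the hub's high degree gives the shape away:

    # A five-node star topology as an adjacency list (names are illustrative).
    star = {
        "hub": ["pc1", "pc2", "pc3", "pc4"],
        "pc1": ["hub"], "pc2": ["hub"], "pc3": ["hub"], "pc4": ["hub"],
    }
    degrees = {node: len(links) for node, links in star.items()}
    print(degrees)   # {'hub': 4, 'pc1': 1, ...}: one central node, four leaves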

XIV. Data/Resource Sharing

DATA SHARING

Data sharing is the practice of making data used for scholarly research available to other investigators. Replication has a long history in science. The motto of The Royal Society is 'Nullius in verba', translated "Take no man's word for it." Many funding agencies, institutions, and publication venues have policies regarding data sharing because transparency and openness are considered by many to be part of the scientific method.

A number of funding agencies and science journals require authors of peer-reviewed papers to share any supplemental information (raw data, statistical methods or source code) necessary to understand, develop or reproduce published research. A great deal of scientific research is not subject to data sharing requirements, and many of these policies have liberal exceptions. In the absence of any binding requirement, data sharing is at the discretion of the scientists themselves. In addition, in certain situations agencies and institutions prohibit or severely limit data sharing to protect proprietary interests, national security, and subject/patient/victim confidentiality. Data sharing may also be restricted to protect institutions and scientists from use of data for political purposes.

Data and methods may be requested from an author years after publication. In order to encourage data sharing and prevent the loss or corruption of data, a number of funding agencies and journals established policies on data archiving. Access to publicly archived data is a recent development in the history of science made possible by technological advances in communications and information technology.

Shared Resource
In computing, a shared resource or network share is a device or piece of information on a computer that can be remotely accessed from another computer, typically via a local area network or an enterprise Intranet, transparently as if it were a resource in the local machine.

Examples are shared file access (also known as disk sharing and folder sharing), shared printer access (printer sharing), shared scanner access, etc. The shared resource is called a shared disk (also known as mounted disk), shared drive volume, shared folder, shared file, shared document, shared printer or shared scanner.
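On systems that support it, a mounted network share really can be treated like a local path; the server and share names below are hypothetical:

    # Reading a file from a network share as if it were local (hypothetical names).
    from pathlib import Path

    share = Path(r"\\fileserver\public\readme.txt")   # UNC path: \\host\share\file
    if share.exists():            # True once the share is reachable and mounted
        print(share.read_text())

The same transparency applies to shared printers and scanners: the client addresses a remote resource through the same interfaces it uses for local ones.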





XV. Latest Gadgets

Lenovo IdeaPhone K900

We're getting a bit bored of all these 5-inch, 1080p mobile phones pouring out of CES 2013, so why are we including yet another one from Lenovo in our hottest gadgets at CES 2013 list? The interesting point about the Lenovo IdeaPhone K900 is that it runs not on an ARM chip but an Intel Atom CPU.
So far, Intel phones seem to have been largely mid-range, budget-focused devices with an emphasis on long battery life. The K900, however, is intended as a flagship. It's got a 2.0GHz CPU, it's just 6.9mm thick and it's got a sexy, brushed-metal chassis. No word on whether it's intended for the global market just yet.

CardNinja

Sometimes the small things are the best. CardNinja is a simple yet effective wallet replacement - or more of a card pocket really - that can stick on the back of most smartphones or cases. We know you're probably already thinking that your cards will fall out, but CardNinja has a sensei-like grip that can hold in one card or a whole bunch.
We can see a lot of practical use for the CardNinja: Subway or Oyster cards needn't go missing again; bulging wallets needn't be seen protruding from back pockets; or when you've got those dancing shoes on it offers a practical way to cut down on what you're carrying.

Acer B1-A71

The Acer Iconia B1 is not a particularly jazzy tablet. What makes it special is that it will retail for £99 and it appears to be usable. The B1 comes with Android Jelly Bean, a 1.2GHz dual-core processor, 512MB of RAM and 8GB of storage. Sensibly, there’s also microSD card storage too - are you paying attention Google?
The 2700mAh battery is something of a concern and there's no 3G, but then the same is true of plenty of tablets that cost 50 per cent more. The proof of the pudding with this one is going to be whether or not there's any lag. There shouldn't be, but we'll see.

iRobot Looj

The iRobot Looj will sweep your gutters clean of debris, and it’s heading for the UK soon. The robotic cleaner specialists have been selling this gutter cleaner in the US for a few months now, but having surveyed the gutters of the UK and Europe, the company has made some tweaks and got the little leaf cleaner set up for our homes.
The idea behind the Looj is that it saves you from the relative danger of shinning up a ladder and brushing leaves out for yourself. Of course, it can’t fly, so you do need to shin up a ladder to put it in the gutter in the first place. And, it will need help accessing the various gutters around your house, because it’s too big to get around the corners.

ZTE Grand S

The Sony Xperia Z might have got the drop on the 5-inch 1080p phone market at CES 2013 but the ZTE Grand S looks like a hot product all the same. You get that same 443ppi screen and a 1.7GHz quad-core Qualcomm S4 Pro chip, which certainly sits at the top end of the mobile spectrum.
The phone itself is One X thin at 142 x 69 x 6.9mm and in all honesty feels very similar also, albeit a lot thinner. There's also the requisite 13-megapixel camera and LTE. The only slight concern is the apparently small 1780 mAh battery.

Panasonic 4K OLED TV

Why have UHD or OLED technology in your telly when you can have both? That's the plan behind this joint venture between Panasonic and Sony which has produced this 56-inch 4K OLED TV prototype at CES 2013.
Panasonic says that what makes its OLED panel better than any others is the printing technology used to make the display. This method is said to dramatically reduce costs of production, increase yield rates and make the displays more reliable. Panasonic also claims that its printing process scales very well. So there’s no reason you can’t print both 24 and 56-inch screens using the same technology.

CST-01 E-ink watch

The CST-01 is the "world's thinnest watch". Using E Ink technology for the display, the screen is just 0.5mm thick and encased in a single piece of flexible stainless steel - making the whole watch 0.80mm thick in total and weighing no more than 12g.
It looks essentially like a bangle or bracelet (depending on your location for the correct naming convention) and is currently making waves on Kickstarter, having made $36,690 of its $200,000 goal in just a matter of days. You can check out the CST-01 E Ink watch on the company's Kickstarter page and even pledge some money to get one on release.

Huawei Ascend Mate

Sadly for Huawei, the Huawei Ascend D2 seems to have been trumped by the Sony Xperia Z listed below, but there's currently no equal to the 6.1-inch Huawei Ascend Mate, the largest phablet around. There's only a 720p screen this time, but the 240ppi density still makes it look sharp.
Keeping the display ticking over is a 4050mAh battery, which should see the device through the day, and a 1.5GHz quad-core chip, which held up well in our brief hands-on session. Software-wise there's Android 4.1 Jelly Bean and Huawei's own Emotion UI, which from the looks of it isn't hugely interfering.

Kingston HyperX 1TB pen drive

Kingston Technology has announced it is launching a pocket-sized 1TB flash drive for those who need a lot of storage in a small casing that's always with them. Not content with being the biggest storage option for a flash USB drive from Kingston, the DataTraveler HyperX Predator 3.0 USB flash drive also promises to be the fastest USB 3.0 flash drive in the Kingston family, with speeds of up to 240MB/s read and 160MB/s write.
Coming in a 512GB storage size from today, the 1TB version will be available before March, the company has promised. The new drive also comes with a keyring attachment so you can carry it on your keys. No word on pricing, but expect it to be expensive.

Asus VivoTab ME400

Asus is hoping to replicate the success of the Nexus 7 for Windows 8 users with a new Windows 8 tablet - the VivoTab ME400. Promised in UK stores for £499 by the end of the month, the new tablet will be powered by an Intel Atom Z2760 dual-core 1.8GHz processor and come with 2GB of RAM. Coming in two flavours, there will be 64GB and 32GB storage options for the 10.1-inch 16:9 screen with a 1366 x 768 resolution.
The VivoTab will pack a rear-facing 8-megapixel camera with flash and 1080p video recording, and a - megapixel camera for video chats around the front. Where Asus is hoping the VivoTab ME400 will be a success is that it will give you the full power of Windows 8, rather than the Windows RT found on the Microsoft Surface, and it promises 9.5 hours of battery life from a single charge.

Panasonic Lumix DMC-TZ40

Panasonic has announced a new flagship travel zoom camera at CES in Las Vegas that's packed with bundles of new features and tech including NFC to help make transferring pictures to other devices incredibly easy. It has a 20x optical zoom 24mm Ultra Wide Angle LEICA DC lens, an 18.1-megapixel sensor and a multi-touch LCD 920K resolution touchscreen display with pinch to zoom.
On the shooting side, the camera promises Light Speed AF, which Panasonic says achieves focus in just 0.1 seconds. There is also 10fps continuous shooting at the touch of a button, so you can capture and review a selection of shots, choosing the one you like best. And the NFC? Tap it against an Android phone with NFC running the Panasonic app and you'll be able to share pictures via a direct Wi-Fi connection.

Sony Xperia Z

The Sony Xperia Z phablet is a mammoth 7.9mm thick, 5-inch beast. It weighs 146 grams, and comes with a Full HD 1080p Reality Display with Mobile Bravia Engine 2 that builds on the technology found in the company's 2012 range of phones.
But it's not just about a big screen. There is power behind the Xperia Z too in the guise of the latest Qualcomm Snapdragon processor - the 1.5GHz asynchronous quad-core Snapdragon S4 processor with 2GB RAM - a 13-megapixel fast-capture camera, 4G LTE, NFC, and a 2400mAh battery which, with additional help from a new software feature from Sony, should give you more than nine days of battery life.

Nvidia Shield

Nvidia has come up with something genuinely exciting in the gaming world with the announcement of Project Shield, a totally Android-based games console that looks like an all-in-one game controller. The real trick of the Shield though is its interaction with the PC. Nvidia has turned the controller into a media streamer that lets you play games from your desktop on your TV.
So your desktop streams the PC game over Wi-Fi to the controller, which in turn plays it on your TV or the attached 5-inch screen. The controller even runs Steam. Think of it like a Wii U for the PC gamer, making it finally possible to easily hook up your PC to your big screen TV in the living room.

Lenovo IdeaCentre Horizon

There are times when computer manufacturers go a bit bonkers just because they can. The Lenovo IdeaCentre Horizon falls into one such category. It's a 27-inch touchscreen "tabletop" computer. It's so big that, when we first saw the Lenovo Horizon, it had people gathered around it playing "virtual" air hockey against one another.
The Horizon comes with four removable controllers that can be placed on the screen's surface and used like joysticks. It's largely a wired solution but does feature a battery that will last two hours - just enough for a game in the house. We don't suppose anyone will be popping it in their backpack.

Fujifilm X100S

When Fujifilm unveiled its retro-styled, high-end X100 compact camera two years ago, it was received with rave reviews for the most part. Its autofocus speed, however, wasn't lightning fast - something the Fujifilm X100S, the updated follow-up to the original compact, looks to stamp out. How so? With the world’s fastest autofocus speed of just 0.08 seconds, no less.
The X100S comes complete with the same 23mm f/2.0 Fujinon fixed prime lens and body styling as its predecessor, but adds two key new features: a newly developed 16.3-megapixel APS-C-sized X-Trans CMOS II sensor with updated processor to match, plus a new, higher-definition hybrid viewfinder system.

Samsung S9 4K TV

Samsung has proudly announced that one of the centrepieces for its CES presence will be an 85-inch Ultra High Definition TV. This comes after the set was awarded a CES 2013 Best of Innovation award and, understandably, the company is happy to shout about it from the rooftops.
This LED panel boasts lifelike picture quality in ultra HD resolution with more than 8 million pixels, four times the resolution of Full HD displays. It uses an innovative enhanced dimming technology and a very high contrast ratio to deliver deep, real blacks and pure whites for greater detail. It will be unveiled at the manufacturer's CES press conference on 7 January.



References