August 13, 2010

Microprocessors

The processor, or CPU (Central Processing Unit), is the component of a computer that runs programs. Along with the memory in particular, it is one of the components that have existed since the first computers and that are present in all of them. A processor built on a single integrated circuit is a microprocessor.

Early processors were designed specifically for a given computer. This costly method of designing processors for a single application led to the mass production of processors suitable for one or more uses. The trend toward standardization began in the era of mainframes (discrete-transistor mainframes and minicomputers) and accelerated rapidly with the advent of the integrated circuit. Integrated circuits have allowed increasingly complex CPUs. Miniaturization and standardization have spread CPUs through modern life far beyond the use of dedicated computing machines.

Microprocessors

The introduction of the microprocessor in the 1970s marked a milestone in the design and implementation of CPUs. Since the first microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPU has almost completely displaced all other ways of implementing a CPU. Mainframe and minicomputer manufacturers of the time launched their own integrated-circuit development programs to upgrade the old architectures of their computers, and later produced instruction-set-compatible microprocessors to ensure backward compatibility with their older models. Previous generations of CPUs were built from many discrete components and low-density integrated circuits on one or more boards. Microprocessors are built from a very small number of very highly integrated circuits (ULSI), usually just one. Because they are implemented on a single, very small die, microprocessors have shorter switching times thanks to physical factors such as the reduced parasitic capacitance of the gates. This has allowed synchronous microprocessors to raise their clock frequency from a few tens of megahertz to several gigahertz. Moreover, as the ability to fabricate extremely small transistors on an integrated circuit has improved, the complexity and number of transistors in a single CPU has grown dramatically. This widely observed trend is described by Moore's law, which has so far proved a fairly good predictor of the growing complexity of processors (and of other integrated circuits).

Recent multicore processors include several cores in a single integrated circuit; their effectiveness depends greatly on the interconnect topology between the cores. New approaches such as stacking memory on top of the processor core (memory stacking) are being studied and should lead to further performance gains. Based on the trends of the previous ten years, processor performance was expected to reach a petaflops around 2010 for servers and around 2030 for PCs.

In early June 2008, the military supercomputer IBM Roadrunner became the first to cross the symbolic threshold of one petaflops. It was followed in November 2008 by Cray's Jaguar supercomputer. As of April 2009, these were the only two supercomputers to have passed the petaflops mark.

While the complexity, size, construction, and general form of CPUs have changed considerably over the last sixty years, their basic design and function have not changed much. Almost all common CPUs today can be described quite accurately as von Neumann stored-program machines. While Moore's law, mentioned above, continues to hold, questions have arisen about the limits of integrated-circuit transistor technology. The miniaturization of electronic gates has gone so far that the effects of phenomena such as electromigration (the progressive degradation of metal interconnects, which reduces the reliability of integrated circuits) and leakage currents (whose importance grows as integrated circuits shrink, and which waste electrical energy), previously negligible, are becoming increasingly significant. These new issues are among the many factors leading researchers to investigate, on the one hand, new processing technologies such as quantum computing and parallel computing, and on the other hand, other ways of using the classical von Neumann model.

Operation

Composition of a processor

The essential parts of a processor are: the arithmetic logic unit (ALU), which carries out the basic arithmetic operations and tests; the control unit, or sequencer, which synchronizes the various components of the processor (in particular, it initializes the registers when the machine starts and handles interrupts); and the registers, small memories (a few bytes each) fast enough that the ALU can manipulate their contents at every clock cycle. A number of registers are common to most processors.

Program counter: this register contains the memory address of the instruction being executed. Accumulator: this register is used to store data being processed by the ALU. Address register: it always contains the address of the next piece of information to be read by the ALU, either the result of the current instruction or the next instruction. Instruction register: it contains the instruction being processed.

Status register: it stores the processor context; the individual bits of this register are the flags used to record information about the outcome of the last instruction executed. Stack pointers: registers of this type, whose number varies with the processor, contain the address of the top of the stack (or stacks). General-purpose registers: these registers are available for computation. The clock synchronizes all the actions of the CPU; it is present in synchronous processors and absent in asynchronous and self-synchronizing processors. The I/O unit handles communication with the computer's memory, or delegates work to dedicated I/O processors, allowing the processor to access the computer's peripherals.
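To make the register list concrete, here is a minimal sketch in C of the register set of a toy 8-bit processor. The names, the widths and the choice of four general-purpose registers are assumptions made for this example, not a description of any real CPU.

#include <stdint.h>

/* Toy register file: every field mirrors one of the registers described above. */
typedef struct {
    uint16_t pc;      /* program counter: address of the instruction being executed */
    uint8_t  acc;     /* accumulator: data currently being processed by the ALU */
    uint16_t mar;     /* address register: address of the next word to be read */
    uint8_t  ir;      /* instruction register: instruction being processed */
    uint8_t  flags;   /* status register: zero, carry, overflow... bits */
    uint16_t sp;      /* stack pointer: address of the top of the stack */
    uint8_t  r[4];    /* general-purpose registers used for computation */
} cpu_registers;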

Current processors also include more complex elements: several ALUs, which allow several instructions to be processed at once (superscalar architectures, in particular, give access to the ALUs in parallel, each ALU executing an instruction independently of the others); a pipelined architecture, which splits instruction processing into stages that overlap in time, a technique that comes from the world of supercomputing; and a branch prediction unit, which lets the processor anticipate a jump during the execution of a program so that it does not have to wait for the final value of the jump address.

That helps keep the pipeline full. A floating-point unit (FPU) accelerates computations on real numbers encoded in floating point. The cache speeds up processing by reducing the access time to memory; these buffers are much faster than RAM but slower than the CPU itself. The instruction cache receives the next instructions to be executed, while the data cache holds the data being manipulated. Sometimes a single unified cache is used for both code and data. Several levels of cache can coexist; they are usually called L1, L2 and L3. In advanced processors, dedicated units are assigned to predicting, statistically and/or heuristically, future accesses to main memory.

A processor is defined by: the width of its internal data registers (8, 16, 32, 64 or 128 bits); its clock rate, expressed in MHz (megahertz) or GHz (gigahertz); its number of computing cores; its instruction set (ISA, Instruction Set Architecture), which depends on its family (CISC, RISC, etc.); its process node, expressed in nm (nanometers); and its microarchitecture.

But what characterizes a processor is mainly the family to which it belongs:
CISC (Complex Instruction Set Computer: instructions chosen to be as close as possible to a high-level language);
RISC (Reduced Instruction Set Computer: simpler instructions and a structure designed for fast execution); VLIW (Very Long Instruction Word); DSP (Digital Signal Processor). The last family (DSP) is rather specific: a processor is a programmable component and is therefore, in principle, capable of running any type of program. For the sake of optimization, however, specialized processors are designed and adapted to certain types of computation (3D, sound, etc.). DSPs are processors specialized in computations related to signal processing; for example, it is not uncommon to see Fourier transforms implemented on a DSP.

A processor has three types of buses: a data bus, which sets the size of the data transferred (independently of the size of the internal registers); an address bus, which determines the number of memory locations that can be addressed; and a control bus, which carries the processor's management signals (IRQ, RESET, etc.).

The operations of the processor

The role of most CPUs, regardless of the physical form they take, is to execute a series of stored instructions called a program.

The instructions (sometimes broken down into micro-instructions) and the data passed to the processor are expressed as binary words (machine code). They are usually stored in memory. The sequencer directs the reading of memory and the formation of the words presented to the ALU, which interprets them.

A set of instructions and data is a program.

The language closest to machine code while remaining readable by humans is assembly language. Computing has nevertheless developed a whole range of so-called high-level languages (such as BASIC, Pascal, C, C++, Fortran, Ada, etc.) designed to simplify the writing of programs.

The operations described here follow the von Neumann architecture. The program is represented by a series of instructions that perform operations using the computer's RAM. There are four steps that nearly all von Neumann architectures use:

fetch - fetch the instruction;
decode - decode the instruction (operation and operands);
execute - execute the operation;
writeback - write back the result.

The first step, FETCH, consists of fetching an instruction from the computer's memory. The location in memory is determined by the program counter (PC), which stores the address of the next instruction in program memory. After an instruction has been fetched, the PC is incremented by the length of the instruction word. With a constant word length, this is always the same amount; for example, an architecture with a constant 32-bit instruction word that uses 8-bit memory words always increments the PC by 4 (except for jumps). Instruction sets that use variable-length instructions, such as x86, increment the PC by the number of memory words corresponding to the length of the last instruction. Moreover, in more complex central processing units, incrementing the PC does not necessarily happen at the end of instruction execution; this is notably the case in heavily pipelined and superscalar architectures. Often the instruction must be fetched from slow memory, stalling the CPU while it waits for it; this issue is largely addressed in modern processors by caches and pipelined architectures.
The instruction fetched from memory determines what the CPU should do. In the DECODE step, the instruction is split into parts that can be used by the other units of the processor. How the instruction word is interpreted is defined by the processor's instruction set (ISA). Often, one part of the instruction, called the opcode (operation code), indicates which operation to perform, for example an addition. The remaining parts of the instruction usually carry the other information needed for its execution, such as the operands of the addition. These operands may be a constant (an immediate value) or the location where the operand's value can be found (a register or a memory address), depending on the addressing mode used. In older designs, the parts of the processor responsible for decoding were fixed and unchangeable because they were hard-wired. In more recent processors, microcode is often used to translate instructions into internal operations; this microcode can sometimes be modified to change how the CPU decodes instructions even after manufacture.

After the fetch and decode steps comes the EXECUTE step. During this phase, the relevant parts of the processor are connected together to carry out the desired operation. For an addition, for example, the arithmetic logic unit (ALU) is connected to a set of inputs and outputs: the inputs carry the numbers to be added and the outputs receive the final sum. The ALU contains the circuitry needed to perform arithmetic and logic operations on its inputs (addition, bitwise operations). If the result of an addition is too large to be encoded by the processor, an overflow flag is set in a status register (see the section on the coding of numbers below).

The last step, WRITEBACK, simply writes the result of the execute step. Very often the result is written to a register internal to the processor, to benefit from very short access times for the following instructions. In other cases the result is written more slowly to RAM, which is cheaper and can hold larger encodings.

Some types of instructions manipulate the program counter instead of directly producing result data. These instructions are called jumps and make it possible to implement loops, conditional execution and functions (subroutines) in programs. Many instructions also change the state of flags in a status register. These flags can be used to condition the behavior of a program, since they often indicate how various operations ended. For example, an instruction that compares two numbers sets a flag in a status register according to the result of the comparison; this flag can then be used by a jump instruction to direct the program flow.

After the instruction has been executed and the result written back, the whole process repeats: the next instruction cycle fetches the next instruction in sequence, since the program counter was incremented. If the previous instruction was a jump, the jump's destination address is loaded into the program counter instead. In more complex processors, several instructions can be fetched, decoded and executed simultaneously; this is called a pipelined architecture, now common in electronic equipment.
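To make the four steps concrete, here is a minimal fetch-decode-execute loop in C for a hypothetical machine. The opcodes, the fixed 4-byte instruction word and the tiny instruction set are assumptions made for this example, not a real ISA.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { OP_ADD = 0x01, OP_JMP = 0x02, OP_HALT = 0xFF };   /* toy opcodes */

static uint8_t  mem[256];   /* program memory, one byte per address */
static uint32_t reg[4];     /* general-purpose registers r0..r3 */
static uint32_t pc;         /* program counter */

static void run(void)
{
    for (;;) {
        /* fetch: read a fixed-length 4-byte instruction word at PC */
        uint8_t op = mem[pc], a = mem[pc + 1], b = mem[pc + 2], c = mem[pc + 3];
        pc += 4;                                         /* constant word length: PC += 4 */

        /* decode and execute */
        switch (op) {
        case OP_ADD:  reg[a] = reg[b] + reg[c]; break;   /* writeback into a register */
        case OP_JMP:  pc = a;                   break;   /* a jump overwrites the PC */
        case OP_HALT: return;
        default:      return;                            /* unknown opcode: stop */
        }
    }
}

int main(void)
{
    /* tiny program: r1 = r1 + r2, then halt */
    uint8_t prog[] = { OP_ADD, 1, 1, 2, OP_HALT, 0, 0, 0 };
    reg[1] = 2; reg[2] = 3;
    memcpy(mem, prog, sizeof prog);
    run();
    printf("r1 = %u\n", (unsigned)reg[1]);               /* prints r1 = 5 */
    return 0;
}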

Processing Speed

The processing speed of a processor is still sometimes expressed in MIPS (millions of instructions per second) or, for the floating-point unit (FPU), in megaflops (millions of floating-point operations per second). Today, however, processors rely on different architectures and parallelization techniques, so these figures alone no longer determine their performance. Dedicated performance-evaluation programs (benchmarks) have been developed to obtain comparative execution times of real programs.

Design and Implementation

The coding of numbers

The way a CPU represents numbers is a design choice that deeply affects its basic operation. Some older computers used an electrical model of the decimal number system (base 10). Others chose more exotic number systems, such as ternary (base 3). Modern processors represent numbers in binary (base 2), where each digit is represented by a physical quantity that can take only two values, such as a "high" or "low" voltage.

The physical quantity of voltage is analog in nature, since it can take an infinite number of values. For the purpose of representing binary digits, ranges of voltage are defined as the states 1 and 0. These states follow from the operating parameters of the switching elements that make up the processor, such as the threshold levels of its transistors.

In addition to the number-representation system, one must consider the size and precision of the numbers a processor can handle. In a binary processor, a bit corresponds to one position in the numbers the processor manipulates. The number of bits (digits) a CPU uses to represent numbers is often called its word size (also bit width or data path width), or its integer precision when dealing with integers (as opposed to floating-point numbers). This number differs between architectures, and often between different units of the same processor. For example, an 8-bit CPU handles numbers that can be represented with 8 binary digits (each digit taking 2 values), that is 2^8 = 256 discrete values. The integer size therefore limits the range of integers that software run by the processor can use directly.

The integer size also affects the number of memory locations the CPU can address. For example, if a binary processor uses 32 bits to represent a memory address and each address designates one byte (8 bits), the maximum amount of memory it can address is 2^32 bytes, or 4 GB. This is a rather simplistic view of a processor's address space; many designs use far more complex addressing schemes, such as paging, to address more memory than their integer size would allow with a flat address space.
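The two figures above follow directly from powers of two; the short C program below, given purely as an illustration, computes them.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned values_8bit = 1u << 8;      /* an 8-bit word encodes 2^8 = 256 distinct values */
    uint64_t addressable = 1ull << 32;   /* 32-bit byte addresses reach 2^32 bytes = 4 GiB */

    printf("8-bit word: %u values\n", values_8bit);
    printf("32-bit addresses: %llu bytes (%llu GiB)\n",
           (unsigned long long)addressable,
           (unsigned long long)(addressable >> 30));
    return 0;
}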

Handling larger integer ranges requires more structure to deal with the extra digits, hence more complexity, a larger die, more power consumption and higher cost. It is therefore not uncommon to find 4-bit or 8-bit microcontrollers in modern applications, even though 16-bit, 32-bit, 64-bit and even 128-bit processors are available. To get the benefits of both short and long word sizes, many CPUs are designed with different widths in different parts of the chip. For example, the IBM System/370 has a natively 32-bit CPU but uses 128-bit precision inside its floating-point unit (FPU) to achieve greater accuracy in floating-point computations. Many recent processors use a comparable combination of number sizes, especially general-purpose processors that must strike the right balance between integer and floating-point capability.

The clock signal

Most processors, and more generally most sequential logic circuits, operate synchronously. This means that they are designed and operate at the pace of a synchronization signal, the clock signal, which usually takes the form of a periodic square wave. By computing the maximum time it takes an electrical signal to propagate through the various branches of the processor's circuits, the designer can select the appropriate period for the clock signal. This period must be longer than the worst-case propagation time of the signal. By setting the clock period to a value comfortably above the worst-case propagation delay, it is possible to design the whole CPU around data moving on the rising or falling edges of the clock signal. This greatly simplifies the CPU, both in its overall design and in the design of its components. On the other hand, it has the drawback of slowing down the processor, whose speed must match that of its slowest element even though other parts are much faster. These limitations are largely compensated for by various methods of increasing CPU parallelism.
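As a rough illustration of this rule, the sketch below computes the highest usable clock frequency from an assumed worst-case propagation delay; both numbers are invented for the example.

#include <stdio.h>

int main(void)
{
    double t_worst_ns = 0.8;                 /* assumed worst-case propagation delay, in ns */
    double margin     = 1.25;                /* safety margin above the worst case */

    double period_ns  = t_worst_ns * margin; /* the clock period must exceed t_worst */
    double f_max_ghz  = 1.0 / period_ns;     /* f = 1 / T; with T in ns this gives GHz */

    printf("period >= %.2f ns  ->  clock at most about %.2f GHz\n", period_ns, f_max_ghz);
    return 0;
}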

Architectural improvements alone cannot solve all the drawbacks of globally synchronous CPUs. For example, the clock signal is subject to delays like any other electrical signal. The higher clock frequencies found in processors make it increasingly difficult to keep the clock signal in phase (synchronized) across the whole CPU. Consequently, many processors now require several identical clock signals to be distributed, so that the delay of a single signal cannot cause the processor to malfunction. Another major problem of rising clock frequencies is the large amount of heat the processor must dissipate. Each change of state of the clock switches a large number of components, whether or not they are in use at that moment, and in general a component that switches consumes more energy than one that stays in a static state. Thus, the higher the clock frequency, the greater the heat dissipation, and processors require ever more effective cooling solutions.
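The link between clock frequency and heat can be made explicit with the usual first-order approximation of CMOS dynamic power, P = a * C * V^2 * f. The sketch below uses invented values; it only illustrates that, all else being equal, dissipation grows linearly with frequency.

#include <stdio.h>

int main(void)
{
    double a = 0.2;    /* activity factor: fraction of gates switching per cycle (assumed) */
    double c = 5e-9;   /* total switched capacitance, in farads (assumed) */
    double v = 1.2;    /* supply voltage, in volts (assumed) */
    double f = 3e9;    /* clock frequency, in hertz */

    printf("dynamic power ~ %.1f W at %.1f GHz\n", a * c * v * v * f, f / 1e9);
    printf("dynamic power ~ %.1f W at %.1f GHz\n", a * c * v * v * 2 * f, 2 * f / 1e9);
    return 0;
}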

The method of clock gating is used to control the unwanted switching of components by inhibiting the clock signal on selected elements, but it is difficult to implement and is reserved for very low-power circuits.

Another method is to do away with the global clock signal; power consumption and heat dissipation are reduced, but the circuit design becomes more complex. Some designs have been built without a global clock signal, for example in the ARM or MIPS processor families; others have only asynchronous parts, such as an asynchronous ALU used with superscalar pipelining to gain arithmetic performance. It is not certain that a fully asynchronous processor can deliver performance comparable to or higher than that of a synchronous processor; while it will clearly be better at simple arithmetic, it is more likely to remain confined to embedded applications (handheld computers, game consoles, etc.).

Parallelism


Figure: model of a subscalar processor; it takes 15 cycles to execute three instructions.
The description of the basic operation of a processor given in the previous section presents the simplest form a CPU can take. This type of processor, called subscalar, executes one instruction on one or two pieces of data at a time.
This process is inefficient and is inherent to subscalar processors. Since only one instruction is executed at a time, the entire processor waits for that instruction to complete before moving on to the next; the CPU therefore stays idle on instructions that need more than one clock cycle to run. Adding a second execution unit (see below) does not improve performance much: instead of one execution unit sitting idle there are now two, and the number of unused transistors increases further. This design, in which the CPU's execution resources handle only one instruction at a time, can at best reach scalar performance (one instruction per clock cycle), and usually remains subscalar (less than one instruction per clock cycle).

Attempts to reach scalar performance and beyond have led to various methods that make the CPU behave less linearly and more in parallel. When talking about processor parallelism, two terms are used to classify these design techniques:

Instruction Level Parallelism (ILP) - parallelism at the instruction level;
Thread Level Parallelism (TLP) - parallelism at the thread level (groups of instructions).

ILP seeks to increase the speed at which instructions are executed by a CPU (that is, to increase the utilization of the execution resources present on the chip). The objective of TLP is to increase the number of threads the CPU can execute simultaneously. The two approaches differ in how they are implemented and in how effectively they increase processor performance for a given application.

ILP: Instruction pipelining and superscalar


Figure: a five-stage pipeline. In the best case, this pipeline can sustain an execution rate of one instruction per cycle.

One of the simplest ways to increase parallelism is to start the fetch and decode stages of an instruction before the previous instruction has finished executing. This is the simplest form of pipelining, and it is used in most modern general-purpose processors. Pipelining lets more than one instruction be in flight at a time by breaking the execution path into distinct stages. This division is often compared to an assembly line.

Pipelining can create data-dependency conflicts, where the result of the previous operation is needed to perform the next one. To resolve this, special care must be taken to detect this kind of situation and, if necessary, delay part of the instruction pipeline. Naturally, the extra circuitry required for this adds to the complexity of parallel processors.
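A small cycle-count model makes the benefit of pipelining, and the cost of stalls, visible. The five stages and the number of dependency stalls below are assumptions chosen for illustration; real pipelines differ.

#include <stdio.h>

int main(void)
{
    int stages = 5;      /* fetch, decode, execute, memory access, writeback */
    int n      = 100;    /* instructions to run */
    int stalls = 12;     /* assumed one-cycle bubbles caused by data dependencies */

    int unpipelined = stages * n;                 /* one instruction at a time */
    int pipelined   = (stages - 1) + n + stalls;  /* overlapped execution plus bubbles */

    printf("unpipelined: %d cycles\n", unpipelined);   /* 500 */
    printf("pipelined:   %d cycles\n", pipelined);     /* 116 */
    return 0;
}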

August 9, 2010

Change Management

"It ought to be remembered there is nothing more difficult to treat, no more dubia to succeed, nor more dangerous to handle, than to be led to introduce new orders. Because the innovator has for enemies all those who make the discipline of old well, and has tepid defenders in all those models would be good for new orders. This coolness arises partly from fear of the opponents, who have the laws on their side, partly from the incredulity of men who will not believe in new things unless they have had a strong experience." 
Niccolo Machiavelli


Change is inevitable. The world is constantly changing, for better or worse. Because change is inevitable, many studies have been conducted about its causes and effects.

The term Change Management refers to a structured approach for moving individuals, groups, organizations and companies from their current condition to a desired future condition.

Paradigm of Change Management 

In a professional context, the word "change" is often used as a synonym of transition but has a more general meaning, while the word "transition" is used as a scientific concept. In genetics, for example, "transition" designates a type of mutation; more generally, the term indicates a natural change of a system from one state to another.

In Change Management, the prevailing questions are: "Where are we right now? Where are we going if we keep the present state of things? Where do we want to go? How do we get there? What must we change about the present condition to actually reach the desired destination?"

In this context, transition refers to a broad set of phenomena. From the individual's point of view, a transition can be a new attitude to be acquired or a change of behavior.

From the perspective of an organization, a transition can be represented by a new type of technology, by a new set of processes to implement, by a new kind of action to take, or by an internal and external cultural leap. Transitional change must be managed to ensure that the organization can achieve its goals in a better way.

Transformation is a larger and more profound change that needs greater effort and attention to direct the organization towards higher-value goals.

From the perspective of a society or social structure, a transition can be associated with a new political project, new legislation, the introduction of a new cultural model, and so on.

A business organization cannot fully control individuals' activities during their working time. Transformation can take place only through the active participation and strong motivation of the individuals to achieve corporate objectives. Corporate transformation happens when there are clear strategies together with strong participation and motivation of those involved.

The culture and existing practices of change management provide an overview of the tools available to govern the impact of transformation on the people involved and, conversely, to help people orient themselves and move through the changes of a constantly turning world. In this regard, recent research highlights the need for an efficient combination of organizational change management tools and models of individual change management.

The theories of change management have evolved from psychology, from economics and business, and from engineering management. Thus, some theories are derived from models of organizational development, while others are based on models of individual and social behavior.

Planning, Organizing, Controlling and Coordinating

Management is an interdisciplinary body of knowledge associated with bureaucracy, public relations, skills development, business strategy, product development, equity, quality assurance, personnel, outsourcing, benchmarking, portfolio management, franchising, and group dynamics.

Management, practically defined, is the process of pursuing corporate objectives through various kinds of resources, especially human resources: the act of getting the corporate work done through people.

The responsibilities of managers include coordinating human resources to achieve high-value corporate objectives, making effective decisions, and creating both short-term and long-term action plans to ensure the achievement of corporate goals and satisfy both customers and stakeholders.

There are four components of management:

Planning: the manager must make the most of every available resource by creating an effective execution plan for how the company is organized to achieve the predetermined high-value corporate objectives.

Organizing: the manager must organize the human resources to get the corporate work done through delegation, empowerment, training, teamwork, leadership, system creation and other crucial business practices.

Controlling: the company must function at an optimum level toward the achievement of the desired objectives, discarding lower-value activities and concentrating on higher-value ones to make the best use of scarce resources such as time, money, space and market share.

Coordinating: the manager must ensure the alignment and harmonization of the contributions of the various components of the organization, through control and system design, so that the organization's activities and processes are conducted in accordance with corporate rules and objectives.


August 3, 2010

Computer System

A computer is a digital device that uses algorithms to process information. Originally, the word "computer" designated a person who performed complicated calculations, with or without mechanical aids. Modern computers are used for far more than mathematical applications; many administrative and financial tasks rely on them.

Today the computer is mostly used to process information, through Internet applications and entertainment. In modern production machinery, computers control manufacturing processes, for example the robots used in car assembly.

Modern science and the modern computer have grown up in parallel. Miniaturization increases the speed at which computers operate and adds hardware functionality that previously had to be provided in software. The big advantage of this development is additional functionality in many sectors.

IBM introduced the first Personal Computer, called the IBM PC, in 1981. This introduction followed earlier initiatives such as the Altair 8800 and the Apple II. The IBM-compatible PC became the standard in computer manufacturing, and many manufacturers produced cheap clones based on that standardized design. The personal computer, or PC, is now part of the daily life of many people and plays a vital role in it.

Computer Structure

A computer consists of hardware and software. An operating system must be installed before a computer becomes useful. The core functions of the operating system are managing memory, sharing processor time, managing internal data transfers, running programs, and providing one or more input and output mechanisms. The operating system also provides the computer with a working environment in which all of these facilities are made available. The operating system uses data stored in permanent memory (hard disks) to manage and run programs.

The separation between the functions of an operating system and the application software layer is vague. Application software means the software created or purchased to carry out the specific functions for which the computer was bought: accounting software, word processors, CRM software, sales and payroll systems, but also web servers, printer drivers and various other utilities.

Hardware

Internal hardware:

motherboard
processor
memory
hard drive
the disk or RAID controller
CD / DVD player and burner
USB controller
video controller
sound controller
network controller with any Wi-Fi and / or Bluetooth
internal modem
firewire controller

External Hardware:

monitor
keyboard
mouse
printer
scanner
external modem
speakers
microphone
webcam
USB

By hardware we mean all the tangible parts of the computer. A distinction is made between internal and external hardware. Internal hardware sits inside the computer case; external hardware is connected to one of the ports on the computer.

Much hardware is built according to standards, particularly in the PC segment. A standard is regularly replaced by an improved version that is not always compatible with older equipment. This can be a reason to replace a computer completely.

PC

A practical example from the PC world: in the 1980s the mouse was usually connected to the serial port and the printer to the parallel port. Both ports could also be used to communicate with another computer. The connectors for the mouse and keyboard were later replaced by PS/2 interfaces. By the end of the 1990s mice were equipped with a USB connector. The printer, which has improved greatly in recent years, is nowadays usually connected through a USB port, although some printers can still be connected to the parallel port. Mice that connect to the serial port are now a rarity. Communication with other computers now takes place almost exclusively over networks, again with a limited number of standards.

Other Architectures

Architectures other than the PC (such as Sun SPARC, IBM RS/6000 and SGI) often have their own standards, which obviously also evolve. Increasingly, standards are being harmonized so that equipment can work with almost all types of computers. One example is USB.

The preponderance of the Intel architecture also means that Intel is the de facto standard for many computer brands. Thus in 2006 the first Apple Macintosh with an Intel x86-based architecture came onto the market, and Sun, HP and IBM also offer machines based on the Intel architecture.

History of Computing

Mechanical Computers

The history of the computer begins with the history of counting. Historically, people developed tools for calculations that could not easily be done in the head, such as the counting board and the abacus. When the need arose for more complex calculations, auxiliary tables were developed (for example, logarithmic tables to help with multiplication). The slide rule was invented to make calculation easier.

When there was much to calculate, many people were put to work on it. These calculating clerks were referred to by the word "computer". In the United Kingdom, centers with many human computers were created in response to the needs of colonial shipping; the tables they produced were used for navigation. Such tables were also taken up eagerly in other fields, such as astronomy.

Charles Babbage, a mathematician, wondered whether these tables could not be generated by a machine. To this end he devised in 1822 the "difference engine": a concept for a machine that could compute tables of polynomials. The machine worked mechanically, and gear technology was not advanced enough to achieve a good result. Babbage later changed the design further.

Then, in 1833, he came up with the "analytical engine". This machine would take its input from punched cards in order to perform mathematical operations. It is widely seen as the conceptual forerunner of the computer, but it was never built.

Nevertheless, many mechanical calculators were constructed and used (well into the second half of the twentieth century). One of the first designs (1645) was by Blaise Pascal.

Only in 1938 did the German engineer Konrad Zuse build the first programmable computer, the Z1. Zuse's machine still worked mechanically, but he made it much simpler by using the binary system. A few years later, Zuse built the first fully functional electromechanical computer, the Z3.

Electronic Computers

The Second World War gave the development of computers a rapid boost. In the United Kingdom the Colossus was used to crack secret German codes. The Colossus was among the first electronic computers, built with electron tubes. In the United States the ENIAC, which occupied several rooms, followed. The first computer in the Netherlands was the ARRA, at the Mathematical Centre; the first computer in a commercial environment was the Miracle, a Ferranti Mark I at the Shell laboratory in Amsterdam.

In the period when permanent storage (hard disks) was not yet widespread, entering data or programs into a computer was very slow. This was initially done with switches, a little later with punched cards, and at an even later stage with magnetic tape.

The computers of the years 1950-1980 were mainly mainframes: very large computers on which hundreds to thousands of users could work simultaneously. Banks and insurance companies in particular used such large-scale mainframes. The user was connected to the mainframe through a simple terminal (once called a "dumb terminal"), later through an application on a desktop computer. The mainframe did not entirely die out with the advent of small computers and is still used by large institutions. The most famous builder of mainframes is IBM.

Scaling

With the rapid development of electronics and of the semiconductors used in transistors, computers could become much smaller and faster. Later, transistors were combined into integrated circuits; the microprocessor is one such integrated circuit. Although microprocessor-based computers such as the Commodore PET (Personal Electronic Transactor) and the Apple II appeared as early as the mid-seventies, the IBM PC of 1981 was the first system explicitly marketed as a personal computer. The PC became ever cheaper and easier to use, and more and more businesses and households bought one. The development continues: business people often take a laptop computer with them, and increasing miniaturization has brought the small Personal Digital Assistant (PDA) into view. Many appliances such as washing machines, VCRs and digital cameras now also contain a computer for various tasks; these are usually called embedded systems.

Computer Applications

Nowadays computers at work are often connected to a computer network, where multiple users each have a PC and software and data are stored in a central repository (a server). To retrieve files from the Internet, a broadband connection is usually used, and in a very few cases a dial-up modem. Broadband connections are more expensive but many times faster than dial-up. A typical broadband setup is a computer connected to a router, which is linked to a broadband Internet connection such as DSL, cable, E1, T1 or fiber. In a large computer network a proxy server is often used to filter the data coming from the Internet. One rapidly expanding application of computers is artificial intelligence, which is used in computer games and robotics.

Home computers are often used to play computer games, to search for information on the Internet, and to communicate through e-mail, instant messaging (a widely used program for this is Windows Live Messenger) and Internet forums. Telephony over the Internet is also emerging; a common application for this is Skype. The current generation of computers is also suitable for digital photo and video editing. Many people use the computer for correspondence, for their administration, or as a media center for playing music and viewing photos. In education, the computer is used for word processing and for retrieving information for homework assignments and reports. More and more students use a laptop.

August 2, 2010

Microprocessor

A microprocessor is a processor whose components have been miniaturized enough to be placed on a single integrated circuit. Functionally, the processor is the part of a computer that executes instructions and processes program data.

Description

Until the early 1970s, the various electronic components forming a processor could not fit on a single integrated circuit, so they had to be spread over several integrated circuits. In 1971 the American company Intel succeeded, for the first time, in placing all the transistors that make up a processor on a single integrated circuit, giving rise to the microprocessor.

This miniaturization made it possible to: increase the operating speed of processors, among other things by reducing the distances between components; reduce costs, among other things by replacing several circuits with a single one; increase reliability, since removing the connections between the components of the processor removes one of the main sources of failure; build much smaller computers, the microcomputers; and reduce energy consumption.

The main characteristics of a microprocessor are: The set of instructions it can execute. Examples of instructions a microprocessor can perform: add two numbers, compare two numbers to determine whether they are equal, compare two numbers to determine which is larger, multiply two numbers. A processor can execute tens, hundreds or even thousands of different instructions.

The complexity of its architecture. This complexity is measured by the number of transistors contained in the microprocessor. The more transistors a microprocessor contains, the more complex the operations it can perform and/or the larger the numbers it can handle.

The number of bits the processor can handle at once. The first microprocessors could not handle more than 4 bits at a time; they had to execute several instructions to add 32-bit or 64-bit numbers. Current microprocessors (as of 2007) can handle 64-bit numbers at once. The number of bits is directly related to the ability to handle large numbers quickly, or numbers of high precision (number of significant digits).

Clock speed. The role of the clock is to pace the work of the microprocessor. The higher the clock speed, the more instructions the microprocessor carries out per second.

All this is theoretical; in practice, depending on the processor architecture, the number of clock cycles needed to perform an elementary operation can vary from one cycle to several dozen per execution unit (typically on a standard processor). For example, a processor A at 400 MHz may be faster than another processor B at 1 GHz, depending on their respective architectures.
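The point can be illustrated with the classic relation throughput = frequency / CPI (cycles per instruction). The CPI values below are assumptions invented for the example, not measurements of real processors.

#include <stdio.h>

int main(void)
{
    double freq_a = 400e6, cpi_a = 1.2;   /* processor A: 400 MHz, few cycles per instruction */
    double freq_b = 1e9,   cpi_b = 4.0;   /* processor B: 1 GHz, many cycles per instruction */

    printf("A: %.0f MIPS\n", freq_a / cpi_a / 1e6);   /* about 333 MIPS */
    printf("B: %.0f MIPS\n", freq_b / cpi_b / 1e6);   /* about 250 MIPS */
    return 0;
}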

The combination of these characteristics determines the power of the microprocessor, which is expressed in millions of instructions per second (MIPS). In the 1970s microprocessors performed less than one million instructions per second; current processors (as of 2007) can perform more than 10 billion instructions per second.

History

The microprocessor was invented by two Intel engineers: Marcian Hoff (known as Ted Hoff) and Federico Faggin. Marcian Hoff devised the architecture of the microprocessor (block architecture and instruction set) in 1969. Federico Faggin created its design in 1970: a new design methodology for chip and logic, using for the first time the silicon-gate technology he had developed in 1968 at Fairchild, along with the circuit and logic design, a new layout and several new technical solutions. Federico Faggin also led the design of the first microprocessor up to its market introduction in 1971.

In 1990 Gilbert Hyatt claimed the invention of the microprocessor on the basis of a patent he had filed in 1970. Recognition of Hyatt's earlier patent would have allowed him to claim royalties on all microprocessors manufactured worldwide. However, Hyatt's patent was invalidated in 1995 by the U.S. Patent Office, on the grounds that the microprocessor described in the patent application had not been built, and could not have been built with the technology available at the time of filing.

The first microprocessor on the market, on November 15, 1971, was the 4-bit Intel 4004. It was followed by the Intel 8008. That microprocessor was originally intended for a text-mode terminal, but, being judged too slow by the client that had commissioned it, it became a general-purpose processor. These processors are the precursors of the Intel 8080, the Zilog Z80 and the future Intel x86 family.

The following table describes the main characteristics of the microprocessors manufactured by Intel and shows their rapid evolution, both in the growing number of transistors and in the shrinking of the circuits and the increase in power. Keep in mind that while this table describes the evolution of Intel products, the products of its competitors have followed, somewhat earlier or later, roughly the same course.

A computer program is, in essence, a stream of instructions executed by a processor. Each instruction requires several clock cycles; the instruction is executed in as many steps as it needs cycles. A sequential microprocessor starts the next instruction only when it has finished the current one. With instruction-level parallelism, the microprocessor can process several instructions in the same clock cycle, provided they do not compete for the same internal resource. In other words, the processor executes instructions that follow each other, and are not dependent on one another, at different stages of completion simultaneously. This execution queue is called a pipeline. The mechanism was first implemented in the 1960s by IBM.

The most advanced processors execute, at any given time, as many instructions as they have pipelines, provided the instructions executed in parallel are not interdependent, that is, the result of each one does not alter the conditions of execution of another. Processors of this type are called superscalar. The first computer equipped with this type of processor was Seymour Cray's CDC 6600 in 1965. The Pentium was the first superscalar processor for PC compatibles.

Today, processor designers do not merely try to run several independent instructions at the same time; they try to optimize the execution time of the instruction stream as a whole. For example, the processor can reorder instructions so that all its pipelines contain independent instructions. This mechanism is called out-of-order execution. This type of processor became dominant in the consumer market from the 1980s through the 1990s. The canonical example of a pipeline is that of a RISC processor, with five stages; the Intel Pentium 4 has a pipeline of 35 stages. A compiler optimized for this kind of processor will produce code that runs faster. To avoid time lost waiting for new instructions, and above all the cost of reloading the context at each thread switch, chip makers have added optimizations that allow threads to share the pipelines, caches and registers. These techniques, collectively known as Simultaneous Multi Threading (SMT), were developed in the 1950s. On the other hand, to gain this performance, compilers must take these techniques into account, so programs must be recompiled for these types of processors. In the early 2000s Intel began producing processors with two-way SMT: its Xeon processors can simultaneously execute two threads that share the same pipelines, caches and registers. Intel calls this two-way SMT:

Hyper-Threading. Super-threading, in turn, is a form of SMT in which multiple threads also share the same resources, but run one after the other rather than simultaneously. The idea of having several processors coexist within a single component has existed for a long time, for example in systems on chip: adding to the processor an FPU, a DSP or a cache memory, possibly even all of the components found on a motherboard. Processors with two or four cores have therefore appeared, such as the IBM POWER4 released in 2001; they incorporate the technologies mentioned above. Computers with this type of processor cost less than an equivalent number of separate processors; however, the performance is not directly comparable and depends on the problem. Specialized APIs have been developed to make the best use of these technologies, such as Intel's Threading Building Blocks.

Date: the year the microprocessor came onto the market.
Name: the name of the microprocessor.
Number of transistors: the number of transistors contained in the microprocessor.
Manufacturing process (µm): the width (in micrometers) of the smallest wire connecting two components of the microprocessor. By comparison, a human hair is about 100 microns thick.
Clock frequency: the clock frequency that paces the CPU. MHz = million(s) of cycles per second; GHz = billion(s) of cycles per second.
Data width: the first number indicates the number of bits on which an operation is performed; the second number indicates the number of bits transferred between memory and the microprocessor.
MIPS: the number of millions of instructions performed by the microprocessor in one second.

Year | Name | Transistors | Process (µm) | Clock frequency | Data width | MIPS
1971 | 4004 | 2 300 | - | 108 kHz | 4 bits / 4-bit bus | -
1974 | 8080 | 6 000 | 6 | 2 MHz | 8 bits / 8-bit bus | 0.64
1979 | 8088 | 29 000 | 3 | 5 MHz | 16 bits / 8-bit bus | 0.33
1982 | 80286 | 134 000 | 1.5 | 6-16 MHz (20 MHz at AMD) | 16 bits / 16-bit bus | 1
1985 | 80386 | 275 000 | 1.5 | 16-40 MHz | 32 bits / 32-bit bus | 5
1989 | 80486 | 1 200 000 | 1 | 16-100 MHz | 32 bits / 32-bit bus | 20
1993 | Pentium | 3 100 000 | 0.8 to 0.28 | 60-233 MHz | 32 bits / 64-bit bus | 100
1997 | Pentium II | 7 500 000 | 0.35 to 0.25 | 233-450 MHz | 32 bits / 64-bit bus | 300
1999 | Pentium III | 9 500 000 | 0.25 to 0.13 | 450-1400 MHz | 32 bits / 64-bit bus | 510
2000 | Pentium 4 | 42 000 000 | 0.18 to 0.065 | 1.3-3.8 GHz | 32 bits / 64-bit bus | 1700
2004 | Pentium 4D "Prescott" | 125 000 000 | 0.09 to 0.065 | 2.66-3.6 GHz | 32 bits / 64-bit bus | 9000
2006 | Core 2 Duo | 291 000 000 | 0.065 | 2.4 GHz (E6600) | 64 bits / 64-bit bus | 22 000
2007 | Core 2 Quad | 2 x 291 000 000 | 0.065 | 3 GHz (Q6850) | 64 bits / 64-bit bus | 2 x 22 000 (?)
2008 | Core 2 Duo (Penryn) | 410 000 000 | 0.045 | 3.33 GHz (E8600) | 64 bits / 64-bit bus | ~24 200
2008 | Core 2 Quad (Penryn) | 2 x 410 000 000 | 0.045 | 3.2 GHz (QX9770) | 64 bits / 64-bit bus | ~2 x 24 200
2008 | Core i7 (Nehalem) | 731 000 000 | 0.045 (2008), 0.032 (2009) | 2.66 GHz (Core i7 920) to 3.33 GHz (Core i7 Ext. Ed. 975) | 64 bits / 64-bit bus | ?
2009 | Core i5/i7 (Lynnfield) | 774 000 000 | 0.045 | 2.66 GHz (Core i5 750) to 2.93 GHz (Core i7 870) | 64 bits / 64-bit bus | ?
2010 | Core i7 (Gulftown) | 1 170 000 000 | 0.032 | 3.33 GHz (Core i7 980X) | 64 bits / 64-bit bus | ?

Families of Microprocessors

Microprocessors are usually grouped into families according to the instruction set they execute. While this instruction set often has a common core across the whole family, the most recent microprocessors of a family often add new instructions, so full backward compatibility within a family is not always assured. For example, a program written for an x86-compatible 80386 processor, which introduced memory protection, may not work on earlier processors, but works on all newer ones (for example a Core Duo from Intel or an Athlon from AMD).

The family best known to the general public is the x86 family, developed mainly by Intel (maker of the Pentium), AMD (maker of the Athlon), VIA and Transmeta. The first two companies dominate the market and make the largest share of microprocessors for PC-compatible personal computers. Intel has also supplied the microprocessors for Macintosh computers since 2006.

PowerPC microprocessors from IBM and Motorola equipped Macintosh computers (made by Apple) until 2006. These microprocessors are also used in IBM's p-series servers and in various embedded systems. In the field of game consoles, PowerPC-derived microprocessors equip the Wii (Broadway), the GameCube (Gekko) and the Xbox 360 (three cores, named Xenon). The PlayStation 3 is equipped with the Cell microprocessor, derived from the POWER4 and close to the PowerPC architecture.
The 6502 from MOS Technology was used to build the famous Apple II.
The Zilog Z80 microprocessor was widely used in the 1980s in the design of the first 8-bit personal computers, such as the Radio Shack TRS-80, the Sinclair ZX80, ZX81 and ZX Spectrum, the Apple II (with a daughter card), the MSX standard and the Amstrad CPC, and more recently in embedded systems.
The 6800 family from Motorola.

The 68000 family (also known as m68k) from Motorola powered the old Macintosh, the Sega Genesis, the Atari ST and the Commodore Amiga. Its derivatives (DragonBall, ColdFire) are still used in embedded systems.
Among the families less well known to the general public:
The SPARC family powers most of the servers and workstations from Sun Microsystems, although more and more new products are based on x86.
The PA-RISC family from HP and VLSI Technology powered HP's older servers and workstations, now replaced by the IA-64 family.
The IA-64 family from HP and Intel powers 64-bit servers and workstations from HP.
The MIPS family powers workstations from Silicon Graphics, game consoles such as the PSone and the Nintendo 64, embedded systems, and Cisco routers. It was the first family to offer a 64-bit architecture, with the R4000 in 1991. The Chinese Loongson processors are based on a new generation of the MIPS architecture and are used in supercomputers and low-power computers.
The ARM family is nowadays used mainly in embedded systems, including many PDAs and smartphones; it was previously used by Acorn for its Archimedes and RiscPC computers.
The Alpha family from DEC powered DEC computers; the line passed to Compaq and then to HP, which eventually discontinued it.

Fast Instruction Execution: the Operating Frequency

Microprocessors are paced by a clock signal (an oscillating signal that imposes a regular rhythm on the circuit). In the mid-1980s this signal had a frequency of 4 to 8 MHz; in the 2000s it reached about 4 GHz. The higher this frequency, the higher the rate at which the microprocessor can execute the basic instructions of programs.

Increasing the frequency has drawbacks: the higher it is, the more power the processor consumes and the more it heats up, which requires a more elaborate CPU cooling solution; the frequency is also limited by the switching time of the logic gates, since between two clock edges the digital signals must have time to travel the entire path needed for the execution of the expected operation; and faster processing requires acting on many parameters (transistor size, electromagnetic interactions between circuits, etc.) that become increasingly difficult to improve while keeping operation reliable.

Overclocking

Overclocking consists of forcing an increase in the frequency of the microprocessor's clock signal (beyond the manufacturer's recommendations) in order to execute more instructions per second.

Optimization of the Execution Path

Current microprocessors are optimized to execute more than one instruction per clock cycle; they are microprocessors with parallel execution paths. In addition, they have units that "anticipate" the next instructions with the help of statistics.

In the race for microprocessor power, two optimization approaches compete:
RISC technology (Reduced Instruction Set Computer): a set of simple, fixed-size instructions that are fast, easy to implement, and allow the clock frequency to be raised without too many technical difficulties.

CISC technology (Complex Instruction Set Computer): each complex instruction requires more clock cycles, but the core has a large number of instructions wired in.

However, with the shrinking size of chips and faster clock rates, the distinction between RISC and CISC has almost completely disappeared. Where clear-cut families once existed, today's microprocessors often use a RISC-like internal structure for speed while remaining compatible with a CISC-style usage model (the Intel x86 family has moved from an initially very typical CISC organization to one that now uses a very fast RISC core fed by on-the-fly translation of the code), helped in part by caches that have become ever larger, with up to three levels.

Structure of a Microprocessor

Main article: Architecture and processor microarchitecture.
The central unit of a microprocessor includes mainly:
an arithmetic logic unit (ALU) which performs operations;
registers that allow the microprocessor to store data temporarily;
a control unit which controls the entire microprocessor based on the instructions of the program.
Some registers have a very specific role:
the status register (flags), which gives the state of the microprocessor at any time; it can only be read;
the program counter (PC), which contains the address of the next instruction to execute;
the stack pointer (SP), which points to a special area of memory called the stack, where subroutine arguments and return addresses are stored.
Only the program counter is essential; a few processors have no status register or no stack pointer (e.g. the NS32000).
The control unit can itself be broken down into:
the instruction register, which stores the code of the instruction to execute;
the decoder, which decodes the instruction;
the sequencer, which executes the instruction by controlling all the other units of the microprocessor; a toy sketch tying these components together is given below.
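The following sketch assembles these components into a toy register machine: a program counter, a stack pointer, a status flag, a few general-purpose registers and a fetch-decode-execute loop playing the role of the sequencer. The instruction set (LOADI, ADD, PUSH, POP, HALT) and its encoding are invented purely for illustration and correspond to no real microprocessor.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy CPU illustrating the structures listed above: registers, PC, SP,
 * a status flag, and a control unit running fetch -> decode -> execute.
 * The instruction encoding is an assumption made up for this example. */

enum { LOADI, ADD, PUSH, POP, HALT };   /* opcode values */

typedef struct {
    uint8_t  reg[4];     /* general-purpose registers R0..R3 */
    uint16_t pc;         /* program counter                  */
    uint8_t  sp;         /* stack pointer into mem[]         */
    uint8_t  zero_flag;  /* status register (here: one flag) */
    uint8_t  mem[256];   /* unified program + stack memory   */
    int      halted;
} Cpu;

static void step(Cpu *c)
{
    uint8_t opcode = c->mem[c->pc++];           /* fetch              */
    switch (opcode) {                           /* decode + execute   */
    case LOADI: {                               /* LOADI r, imm       */
        uint8_t r = c->mem[c->pc++];
        c->reg[r] = c->mem[c->pc++];
        break;
    }
    case ADD: {                                 /* ADD rd, rs : rd += rs */
        uint8_t rd = c->mem[c->pc++], rs = c->mem[c->pc++];
        c->reg[rd] = (uint8_t)(c->reg[rd] + c->reg[rs]);
        c->zero_flag = (c->reg[rd] == 0);       /* ALU updates the flag */
        break;
    }
    case PUSH:                                  /* PUSH r             */
        c->mem[--c->sp] = c->reg[c->mem[c->pc++]];
        break;
    case POP:                                   /* POP r              */
        c->reg[c->mem[c->pc++]] = c->mem[c->sp++];
        break;
    default:                                    /* HALT or unknown    */
        c->halted = 1;
    }
}

int main(void)
{
    Cpu c = { .pc = 0, .sp = 255 };
    uint8_t program[] = { LOADI, 0, 40, LOADI, 1, 2, ADD, 0, 1,
                          PUSH, 0, POP, 2, HALT };
    for (unsigned i = 0; i < sizeof program; i++)
        c.mem[i] = program[i];

    while (!c.halted)
        step(&c);                               /* the sequencer's loop */

    printf("R0=%u R2=%u zero_flag=%u\n", c.reg[0], c.reg[2], c.zero_flag);
    return 0;
}
```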

Manufacture of Microprocessors

The fabrication of a microprocessor is essentially the same as that of any integrated circuit and therefore follows a complex process. The enormous size and complexity of most microprocessors, however, further increases the cost of the operation.

Moore's Law

Moore's Law, which states that the degree of integration of microprocessors doubles every 18 months, also implies that production costs double along with the degree of integration. The manufacture of microprocessors is now considered one of the two factors driving the growth in capacity of fabrication facilities (together with the constraints related to manufacturing large-capacity memories). The industry's feature size reached 45 nm in 2008. As the feature size shrinks further, foundries come up against the rules of quantum mechanics.
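A quick back-of-the-envelope sketch of that doubling rule is given below: starting from an assumed transistor count of about 2,300 (the order of magnitude of a 1971-class chip), it compounds one doubling every 18 months; the starting figure and time horizon are illustrative assumptions, and the result only shows how the stated law grows, not actual industry counts.

```c
#include <stdio.h>
#include <math.h>

/* Moore's law as stated above: the degree of integration doubles
 * every 18 months. Starting count and horizon are assumptions. */
int main(void)
{
    const double start_transistors = 2.3e3;  /* assumed ~1971-class starting count */
    const double doubling_months   = 18.0;

    for (int years = 0; years <= 40; years += 10) {
        double doublings = (years * 12.0) / doubling_months;
        double count = start_transistors * pow(2.0, doublings);
        printf("after %2d years: about %.3g transistors\n", years, count);
    }
    return 0;
}
```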

Multiple Parallel Processors

Depending on the operating system, the current trend is to run multiple processors in parallel and to split work into multiple tasks, which makes the trade-offs between processes increasingly important (e.g. hyper-threading). In fact, the superscalar architecture of current processors (parallel execution of work within a single execution unit) is now mostly exploited through multi-threading. With multi-core processors, however, the distribution of tasks between the cores must be studied closely so that operations do not slow down; this is known as processor affinity.
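On Linux, one way to express processor affinity is the sched_setaffinity system call; the minimal sketch below pins the calling process to core 0. This is only an OS-specific illustration under the assumption of a Linux/glibc environment, and the choice of core 0 is arbitrary; other systems expose different APIs for the same idea.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to CPU core 0 (Linux-specific illustration
 * of "processor affinity"; core 0 is an arbitrary choice here). */
int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                 /* allow execution on core 0 only */

    if (sched_setaffinity(0, sizeof mask, &mask) != 0) {  /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }

    printf("now restricted to core 0 (running on CPU %d)\n", sched_getcpu());
    return 0;
}
```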

Security

There are many projects to integrate, at the heart of the microprocessor, functions intended to prevent unauthorized copying of files (DRM). The Trusted Computing Group consortium, in particular, has already designed chips that create a "zone of trust" in the computer system by means of a dedicated identification chip. Some computer models, such as IBM laptops, already include such chips. The next generation of this technology will probably be integrated directly into CPUs. These technologies are criticized, especially by supporters of free software, who see in them a threat to civil liberties. Indeed, combined with an operating system designed for this purpose, for example one derived from Microsoft's NGSCB project, this technology allows the trusted third party (the provider that checks the validity of system components) to access the inside of the computer remotely, or even to prevent certain operations from being executed on it.

Apple Computer

Apple, formerly Apple Computer (NASDAQ: AAPL), is an American multinational computer company headquartered in Cupertino, in Silicon Valley. The firm became famous with its Apple II personal computer (1977), then with the Macintosh line (1984). In 2001 Apple diversified its activities by venturing into the music industry with the iPod, followed by the iTunes Store (2003), two products designed for digital music, and then into mobile telephony with the iPhone (2007). The brand is known for the simplicity of its user interfaces and the sleek design of its products, and also for its ability to popularize existing technologies by making them accessible even to a lay audience. This was the case with the graphical interface with windows and mouse, and more recently with the multitouch screen.

History of Apple

Apple was established on 1 April 1976 in Cupertino and incorporated as a company on January 3, 1977. From its beginnings to the company as it exists in 2010, Apple has gone through various phases tied to the evolution of the computing world, from a world without personal computers to a twenty-first-century society interconnected through fixed and mobile devices. Its story is closely linked to that of one of its co-founders, Steve Jobs, who was forced to leave the firm in 1985, rehired in December 1996, and became CEO of the company in 1997. Among the key products Apple has produced since its founding are the Apple I and II, the Macintosh, the iPod, the iPhone and the iPad.

The "Apple culture"

Meetings with the public. Apple has always favored a sales approach that brings it closer to its consumers and potential consumers. This is part of an overall marketing strategy aimed, among other things, at giving consumers the impression of belonging to a community of users close to the computer company. There are several important annual meetings between Apple, its customers, its developers and, above all, the press. Each opens with a presentation in which Steve Jobs announces the company's financial results and, generally, new products. The importance of these major meetings fluctuates; in the 2000s, the three most important were the Macworld Expos of New York (now defunct), San Francisco and Tokyo.

The Macworld Conference & Expo San Francisco took place in January; there Apple mostly announced new products aimed at the general public. The 2008 edition was held from 14 to 18 January. Macworld Expo 2009 was the last one Apple attended and took place without Steve Jobs, who was replaced by Phil Schiller (a vice-president of the company). According to an official statement, the website and the Apple Stores are sufficient to ensure the success of the brand.

Worldwide Developers Conference (WWDC): an annual event of great importance, at which Apple unveils the main novelties of the year (usually software-oriented and aimed at a professional audience). The most recent was held in San Francisco from June 7 to June 11, 2010, during which the iPhone 4 was unveiled.

Apple Expo: launched in 1984 and held in Paris (Porte de Versailles) in September, it was the leading Mac event in Europe and the leading IT and digital event in France. Its importance in the company's communication has declined in recent years: Apple has not held a keynote there since 2006 and announced its withdrawal from the show's 2008 edition, held from 17 to 20 September 2008.

Apple has thus phased out all these trade shows in favor of cheaper ways of staying close to its users, such as the Apple Stores and its official website. The abandonment of this tradition has nevertheless caused a certain sadness among fans of the brand, who see the friendly atmosphere the old Apple maintained evaporating. Still, Apple continues to hold regular keynotes to present its major new products.

The first version of the logo represented Isaac Newton under an apple tree from which an apple hung. It was very quickly replaced by a rainbow-colored apple bitten on the right side, designed by Rob Janoff. The visual features of the logo appear to be a systematic counterpoint to the logo of IBM, Apple's main competitor at the time. First, the silhouette of the apple gives the Apple logo a simple shape read as a single block, whereas IBM's logo is a triptych. Second, the shapes of the fruit are constructed entirely from curves, while IBM's are based on straight lines. Third, Apple's color sequence is of the ABBA type: warm colors in the center, emphasizing the bite of the apple, and cold colors on the outside, whereas IBM's sequence is repetitive (BABA) and two-toned, with disjoint bands in cool colors such as blue. The apple is said to be bitten so as not to be confused with the logo of Apple Corps, the record label founded by the Beatles, though the story also goes that the bite is there so the apple would not be mistaken for a tomato.

In his history of Apple, entitled The Third Apple, Jean-Louis Gassée mentions Alan Turing several times but does not present the logo as an iconic tribute to him; there are three apples: the fruit of the tree of knowledge (the myth of Adam and Eve), Isaac Newton's apple (the personal organizer was, incidentally, called the Apple Newton), and finally the apple of Apple. The founding of Apple lends itself to myths: the magazine SVM Mac also reported the legend according to which the poverty in which the two Steves lived in the company's early days led to a heavy consumption of apples, Steve Jobs having a particular liking for the McIntosh, a variety that tends to take on a tanned hue.

Another legend says that Steve Jobs, two months behind in naming his business, threatened to call the company "Apple Computer" if his colleagues did not come up with a better suggestion by 5 p.m. As nothing better was found, the company became Apple. Another reason mentioned was the desire to appear ahead of its competitors in alphabetical order, first and foremost Atari.

Rob Janoff met Steve Jobs for the first time when he worked in Palo Alto at the PR firm of Regis McKenna. He was put in charge of designing a logo for a friend of his boss: Steve Jobs. "For inspiration, the first thing I did was go to the supermarket, buy a bag of apples and slice them," Janoff recalls. The fruit of his work: a monochrome 2D apple with a small bite on the right side. Jobs liked the concept, although he suggested it would be better a bit more colorful. Janoff's boss disagreed, insisting that a black logo would be less expensive to print. "But Jobs was determined, arguing that color was the key to humanizing the company," Janoff continues.

From 1997 to 2001 the logo changed again: the shape remained the same (though sometimes adorned with a slight relief effect), but the rainbow pattern was replaced by a single monochrome tone, varying with the product that bears it. The battle with IBM was over; Apple now wanted a catchy symbol, like Sony's or Nike's. This change was intended to give the company an image more in line with its ambitions in the professional market. The colorful apple, evoking for many the hippie movement, the sounds played when windows opened or the "Happy Mac" (formerly shown at Macintosh startup), like other icons by Susan Kare, had gone out of fashion, while the glossy embossed logo gives the brand an image of luxury and quality. On computers, the iMac was the first to replace the apple with a monochrome one on its shell, and the Mac OS X operating system completed the job by putting a monochrome apple in the corner of the computer's menu bar.

Between Marketing, Innovation and Choice

Apple has a reputation for providing systems that are easy to use and intuitive, with software that integrates perfectly into a stable operating system, which in turn fits the machine perfectly. This is due to Apple's closed policy: it chooses its own hardware and peripherals. This is probably the main advantage of the vertical product strategy pursued by the brand, despite the drawbacks of such a strategy for both the customer and the company. Apple has also developed an in-house distribution network by opening its own shops, the Apple Stores. Although this move into retailing received a mixed reception from independent dealers, the strategy proved successful. Indeed, as showcases for the "Apple brand" built around a design concept combining technology and simplicity, the Apple Stores contribute to the identity of the brand, which Naomi Klein identifies in her book No Logo as one of the most sophisticated of modern times, on a par with Nike. In 1984 Apple enlisted George Orwell in an advertisement against IBM; later, Gandhi testified in favor of the brand in its "Think different" advertising.

Organization

Apple is criticized for its model of vertical integration, which runs counter to the prescriptions of most economists, especially in IT. Despite this, the company turns a profit. Apple is also criticized for depending heavily on the personality of the person who leads it, especially during the two Jobs eras. Some argue that Steve Jobs is the subject of a cult of personality, or at least that some elements of such a cult exist in his relationship with customers, and that he maintains a "reality distortion field" around himself.

Apple is still criticized for its closed architecture and its rejection of standards: the term "not-invented-here syndrome" is used. This criticism is now largely misplaced, however, since most of the electronic components of its computers are common to the whole computer industry. In addition, the operating system uses a number of popular technologies (MPEG-1 to MPEG-4, OpenGL, free software). Finally, at the time when this policy of technology choices existed, it was not objectionable in itself: research and innovation are engines of development, and every company seeks to recoup its research and development costs by bringing its inventions to market. Moreover, several technologies first used by Apple were later generalized across the personal computer (FireWire, ZeroConf (Bonjour, formerly Apple Rendezvous)). Similarly, Apple has enabled and accelerated the adoption of innovations by generalizing them at a stroke across its entire range (3.5-inch floppy disks, SCSI, USB, Wi-Fi (AirPort)).

Some analysts criticize the competition within Apple itself between different programming environments and their developers: those of Cocoa, heir to NeXTSTEP, and those of Carbon, derived from Mac OS 9. This rivalry is seen as counter-productive, as was the rivalry between the Apple II and Macintosh teams in its day. A less commonly heard criticism [citation needed], because it concerns businesses more than individuals, is the lack of a roadmap (announced planning) for Apple's software technologies. The IT departments of large companies expect a vendor to announce clearly what its software will become over the next five years, in order to make medium-term investment choices. Apple, however, is accused of not really announcing the stages of its evolution beyond a year, unlike competitors such as Microsoft. A simple example is the abrupt announcement of the abandonment of PowerPC processors from IBM/Motorola in favor of those from Intel: software publishers had to update their applications, while owners of large computer fleets had to pay for these updates and manage two lines of machines. An earlier announcement would have allowed them to anticipate the change, planning fleet renewals around the hardware/software pair so as to avoid additional update costs.

The regular cancellation of the development of otherwise promising technologies (e.g. OpenDoc) has also left developers by the roadside, tired of investing time and money in dead ends. As a result, the number of independent developers and publishers declined during the 1990s. This criticism has become outdated since the introduction of Mac OS X, which has brought many developers from the Unix/Linux and free-software worlds into the Apple fold.

Moreover, the after-sales service system in Europe, entrusted to a single provider for laptops and G5 machines, proved to be one of the weaknesses of Apple's commercial organization. After significant delays in the early 2000s, the service provider went bankrupt in early 2005, forcing Apple to rely on retailers' technicians and causing further delays in repairs.

Price

Finally, Apple is often accused of pursuing a policy of excessively high prices. The price of a Macintosh could often be twice that of an IBM-compatible PC in the 1980s, or even three times as much in the 1990s after the advent of the Pentium. This high-price policy probably hampered the development of the Macintosh in favor of the PC and the consumer multimedia computers of the era, such as the Amiga and the Atari. Today, Apple's list prices are often higher and represent a barrier for many users wishing to make the "switch", that is, to move from Windows to Mac OS X, even if the release of the Mac mini is an initiative that reopens the field somewhat.
It is true that Apple's margins are much higher than those generally practiced in this sector (between 25% and 30% gross margin in the early 2000s, when some PC makers were content with 8% or less). However, a Gartner Group study commissioned by Apple Australia and circulated by it in the press in 2002 stated that the TCO (total cost of ownership), that is, the total cost of computer equipment once optional hardware, software licenses, support and so on are added, is lower for a Mac than for a PC running Windows. This was later qualified by Gartner, which said that the information contained in the report did not reflect its editorial position and was intended for internal use at Apple, corresponding to a specific scenario.

Environment

In late 2006 and early 2007, Greenpeace twice ranked Apple last among fourteen electronics manufacturers, on criteria such as waste management, recycling of obsolete products, the use of polluting components and communication with the general public on these subjects. Apple contested this ranking in an open letter from Steve Jobs. The company says it has been engaged in environmental initiatives for several years. Sites devoted to the Macintosh have on several occasions discussed Apple's environmental record and Greenpeace's use of Apple's image. Greenpeace France recently made itself visible (May 2007) by organizing a demonstration outside an Apple retailer, even though in early May the international branch of the association had raised Apple's ranking to an average of 5/10 following Steve Jobs's letter. In March 2008 Apple sat mid-table, with a score of 7/10.
The Cupertino company responded fairly quickly, but not for the iPhone. When announcing the new iMacs on August 7, 2007, Steve Jobs opened the Apple event with these words: "Ladies and gentlemen, here is the new iMac; it is much more environmentally friendly and recyclable...". The white polycarbonate was replaced by anodized aluminum components and glass front panels. The iPhone has also been criticized by Greenpeace, which denounced the extremely toxic materials inside it. Apple has not responded and seems in no hurry to make the iPhone more environmentally friendly; indeed, it replaced the aluminum rear shell of the iPhone with a polycarbonate shell for the iPhone 3G.